Test Report: Docker_Linux 17363

9401f4c578044658a0ecc50e70738aa1fc99eff9:2023-10-05:31314

Tests failed (2/322)

Order | Failed test                                            | Duration (s)
329   | TestStartStop/group/old-k8s-version/serial/FirstStart  | 581.47
384   | TestStartStop/group/old-k8s-version/serial/DeployApp   | 483.86
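To reproduce the FirstStart failure outside CI, one option is to re-run the exact start invocation the test logged (shown in full in the failure detail below). This is only a sketch: the profile name old-k8s-version-330869 and the Jenkins-specific KUBECONFIG/MINIKUBE_HOME paths belong to this run and would be swapped for local equivalents.

	out/minikube-linux-amd64 start -p old-k8s-version-330869 \
	  --memory=2200 --alsologtostderr --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system \
	  --disable-driver-mounts --keep-context=false \
	  --driver=docker --container-runtime=docker \
	  --kubernetes-version=v1.16.0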
TestStartStop/group/old-k8s-version/serial/FirstStart (581.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-330869 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-330869 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: exit status 80 (9m39.788985919s)

-- stdout --
	* [old-k8s-version-330869] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17363-491115/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-491115/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting control plane node old-k8s-version-330869 in cluster old-k8s-version-330869
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 24.0.6 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Verifying Kubernetes components...
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
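The captured stdout above stops after the addons are enabled, with no completion message, and the command exited with status 80 after roughly 9m40s. A sketch of follow-up diagnostics, assuming the profile from this run is still present on the agent (binary path and profile name are taken from the log):

	# Collect cluster and kubelet logs from the leftover profile
	out/minikube-linux-amd64 logs -p old-k8s-version-330869
	# Tear the profile down once the logs have been saved
	out/minikube-linux-amd64 delete -p old-k8s-version-330869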
** stderr ** 
	I1005 20:36:47.919687  848852 out.go:296] Setting OutFile to fd 1 ...
	I1005 20:36:47.919964  848852 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:36:47.919973  848852 out.go:309] Setting ErrFile to fd 2...
	I1005 20:36:47.919978  848852 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:36:47.920173  848852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-491115/.minikube/bin
	I1005 20:36:47.920814  848852 out.go:303] Setting JSON to false
	I1005 20:36:47.923556  848852 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8356,"bootTime":1696529852,"procs":904,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1005 20:36:47.923650  848852 start.go:138] virtualization: kvm guest
	I1005 20:36:47.925906  848852 out.go:177] * [old-k8s-version-330869] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1005 20:36:47.927990  848852 out.go:177]   - MINIKUBE_LOCATION=17363
	I1005 20:36:47.929419  848852 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 20:36:47.928012  848852 notify.go:220] Checking for updates...
	I1005 20:36:47.932014  848852 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17363-491115/kubeconfig
	I1005 20:36:47.933550  848852 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-491115/.minikube
	I1005 20:36:47.934951  848852 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1005 20:36:47.936287  848852 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1005 20:36:47.938025  848852 config.go:182] Loaded profile config "bridge-264029": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1005 20:36:47.938137  848852 config.go:182] Loaded profile config "flannel-264029": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1005 20:36:47.938237  848852 config.go:182] Loaded profile config "kubenet-264029": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1005 20:36:47.938345  848852 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 20:36:47.964665  848852 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 20:36:47.964798  848852 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 20:36:48.025439  848852 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:66 SystemTime:2023-10-05 20:36:48.015397239 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1005 20:36:48.025552  848852 docker.go:294] overlay module found
	I1005 20:36:48.027633  848852 out.go:177] * Using the docker driver based on user configuration
	I1005 20:36:48.028989  848852 start.go:298] selected driver: docker
	I1005 20:36:48.029005  848852 start.go:902] validating driver "docker" against <nil>
	I1005 20:36:48.029019  848852 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1005 20:36:48.030024  848852 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 20:36:48.085626  848852 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:66 SystemTime:2023-10-05 20:36:48.075813573 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1005 20:36:48.085813  848852 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1005 20:36:48.086021  848852 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1005 20:36:48.088133  848852 out.go:177] * Using Docker driver with root privileges
	I1005 20:36:48.089876  848852 cni.go:84] Creating CNI manager for ""
	I1005 20:36:48.089915  848852 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1005 20:36:48.089929  848852 start_flags.go:321] config:
	{Name:old-k8s-version-330869 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-330869 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 20:36:48.093801  848852 out.go:177] * Starting control plane node old-k8s-version-330869 in cluster old-k8s-version-330869
	I1005 20:36:48.095264  848852 cache.go:122] Beginning downloading kic base image for docker with docker
	I1005 20:36:48.096753  848852 out.go:177] * Pulling base image ...
	I1005 20:36:48.098129  848852 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1005 20:36:48.098186  848852 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17363-491115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1005 20:36:48.098212  848852 cache.go:57] Caching tarball of preloaded images
	I1005 20:36:48.098238  848852 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1005 20:36:48.098335  848852 preload.go:174] Found /home/jenkins/minikube-integration/17363-491115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1005 20:36:48.098350  848852 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1005 20:36:48.098477  848852 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/config.json ...
	I1005 20:36:48.098504  848852 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/config.json: {Name:mk99752faf0bffc70eb01d982f9c37d9a054b90f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:36:48.116009  848852 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon, skipping pull
	I1005 20:36:48.116035  848852 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in daemon, skipping load
	I1005 20:36:48.116061  848852 cache.go:195] Successfully downloaded all kic artifacts
	I1005 20:36:48.116099  848852 start.go:365] acquiring machines lock for old-k8s-version-330869: {Name:mk380d306e21968d92a9ebd5eb2e08ba9e79c051 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 20:36:48.116229  848852 start.go:369] acquired machines lock for "old-k8s-version-330869" in 94.435µs
	I1005 20:36:48.116272  848852 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-330869 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-330869 Namespace:default APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1005 20:36:48.116379  848852 start.go:125] createHost starting for "" (driver="docker")
	I1005 20:36:48.118693  848852 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1005 20:36:48.118976  848852 start.go:159] libmachine.API.Create for "old-k8s-version-330869" (driver="docker")
	I1005 20:36:48.119020  848852 client.go:168] LocalClient.Create starting
	I1005 20:36:48.119112  848852 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca.pem
	I1005 20:36:48.119146  848852 main.go:141] libmachine: Decoding PEM data...
	I1005 20:36:48.119164  848852 main.go:141] libmachine: Parsing certificate...
	I1005 20:36:48.119213  848852 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17363-491115/.minikube/certs/cert.pem
	I1005 20:36:48.119241  848852 main.go:141] libmachine: Decoding PEM data...
	I1005 20:36:48.119252  848852 main.go:141] libmachine: Parsing certificate...
	I1005 20:36:48.120099  848852 cli_runner.go:164] Run: docker network inspect old-k8s-version-330869 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1005 20:36:48.137596  848852 cli_runner.go:211] docker network inspect old-k8s-version-330869 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1005 20:36:48.137694  848852 network_create.go:281] running [docker network inspect old-k8s-version-330869] to gather additional debugging logs...
	I1005 20:36:48.137717  848852 cli_runner.go:164] Run: docker network inspect old-k8s-version-330869
	W1005 20:36:48.154952  848852 cli_runner.go:211] docker network inspect old-k8s-version-330869 returned with exit code 1
	I1005 20:36:48.154989  848852 network_create.go:284] error running [docker network inspect old-k8s-version-330869]: docker network inspect old-k8s-version-330869: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-330869 not found
	I1005 20:36:48.155024  848852 network_create.go:286] output of [docker network inspect old-k8s-version-330869]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-330869 not found
	
	** /stderr **
	I1005 20:36:48.155170  848852 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1005 20:36:48.173321  848852 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d49f16ce6477 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:89:e2:2f:34} reservation:<nil>}
	I1005 20:36:48.174214  848852 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cd43b43b5fb6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:70:71:c4:9f} reservation:<nil>}
	I1005 20:36:48.174897  848852 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ec7c14bb7816 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:05:92:77:e8} reservation:<nil>}
	I1005 20:36:48.175632  848852 network.go:214] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c93df026c753 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:48:da:f3:d2} reservation:<nil>}
	I1005 20:36:48.176528  848852 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000f16cc0}
	I1005 20:36:48.176554  848852 network_create.go:124] attempt to create docker network old-k8s-version-330869 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1005 20:36:48.176618  848852 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-330869 old-k8s-version-330869
	I1005 20:36:48.234912  848852 network_create.go:108] docker network old-k8s-version-330869 192.168.85.0/24 created
	I1005 20:36:48.234958  848852 kic.go:117] calculated static IP "192.168.85.2" for the "old-k8s-version-330869" container
	I1005 20:36:48.235048  848852 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1005 20:36:48.252099  848852 cli_runner.go:164] Run: docker volume create old-k8s-version-330869 --label name.minikube.sigs.k8s.io=old-k8s-version-330869 --label created_by.minikube.sigs.k8s.io=true
	I1005 20:36:48.270352  848852 oci.go:103] Successfully created a docker volume old-k8s-version-330869
	I1005 20:36:48.270463  848852 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-330869-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-330869 --entrypoint /usr/bin/test -v old-k8s-version-330869:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
	I1005 20:36:48.791875  848852 oci.go:107] Successfully prepared a docker volume old-k8s-version-330869
	I1005 20:36:48.791930  848852 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1005 20:36:48.791970  848852 kic.go:190] Starting extracting preloaded images to volume ...
	I1005 20:36:48.792070  848852 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17363-491115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-330869:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir
	I1005 20:36:53.585436  848852 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17363-491115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-330869:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir: (4.793290555s)
	I1005 20:36:53.585466  848852 kic.go:199] duration metric: took 4.793497 seconds to extract preloaded images to volume
	W1005 20:36:53.585577  848852 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1005 20:36:53.585705  848852 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1005 20:36:53.688434  848852 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-330869 --name old-k8s-version-330869 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-330869 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-330869 --network old-k8s-version-330869 --ip 192.168.85.2 --volume old-k8s-version-330869:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae
	I1005 20:36:54.092328  848852 cli_runner.go:164] Run: docker container inspect old-k8s-version-330869 --format={{.State.Running}}
	I1005 20:36:54.115472  848852 cli_runner.go:164] Run: docker container inspect old-k8s-version-330869 --format={{.State.Status}}
	I1005 20:36:54.146432  848852 cli_runner.go:164] Run: docker exec old-k8s-version-330869 stat /var/lib/dpkg/alternatives/iptables
	I1005 20:36:54.222564  848852 oci.go:144] the created container "old-k8s-version-330869" has a running status.
	I1005 20:36:54.222600  848852 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17363-491115/.minikube/machines/old-k8s-version-330869/id_rsa...
	I1005 20:36:54.393461  848852 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17363-491115/.minikube/machines/old-k8s-version-330869/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1005 20:36:54.417158  848852 cli_runner.go:164] Run: docker container inspect old-k8s-version-330869 --format={{.State.Status}}
	I1005 20:36:54.450025  848852 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1005 20:36:54.450055  848852 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-330869 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1005 20:36:54.538711  848852 cli_runner.go:164] Run: docker container inspect old-k8s-version-330869 --format={{.State.Status}}
	I1005 20:36:54.562318  848852 machine.go:88] provisioning docker machine ...
	I1005 20:36:54.562370  848852 ubuntu.go:169] provisioning hostname "old-k8s-version-330869"
	I1005 20:36:54.562434  848852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330869
	I1005 20:36:54.595826  848852 main.go:141] libmachine: Using SSH client type: native
	I1005 20:36:54.596278  848852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 127.0.0.1 33383 <nil> <nil>}
	I1005 20:36:54.596304  848852 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-330869 && echo "old-k8s-version-330869" | sudo tee /etc/hostname
	I1005 20:36:54.597029  848852 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41130->127.0.0.1:33383: read: connection reset by peer
	I1005 20:36:57.757089  848852 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-330869
	
	I1005 20:36:57.757178  848852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330869
	I1005 20:36:57.777649  848852 main.go:141] libmachine: Using SSH client type: native
	I1005 20:36:57.777978  848852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 127.0.0.1 33383 <nil> <nil>}
	I1005 20:36:57.778004  848852 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-330869' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-330869/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-330869' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1005 20:36:57.913764  848852 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1005 20:36:57.913805  848852 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17363-491115/.minikube CaCertPath:/home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17363-491115/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17363-491115/.minikube}
	I1005 20:36:57.913838  848852 ubuntu.go:177] setting up certificates
	I1005 20:36:57.913858  848852 provision.go:83] configureAuth start
	I1005 20:36:57.913935  848852 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-330869
	I1005 20:36:57.934828  848852 provision.go:138] copyHostCerts
	I1005 20:36:57.934885  848852 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-491115/.minikube/cert.pem, removing ...
	I1005 20:36:57.934893  848852 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-491115/.minikube/cert.pem
	I1005 20:36:57.934972  848852 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17363-491115/.minikube/cert.pem (1123 bytes)
	I1005 20:36:57.935086  848852 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-491115/.minikube/key.pem, removing ...
	I1005 20:36:57.935101  848852 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-491115/.minikube/key.pem
	I1005 20:36:57.935139  848852 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17363-491115/.minikube/key.pem (1679 bytes)
	I1005 20:36:57.935276  848852 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-491115/.minikube/ca.pem, removing ...
	I1005 20:36:57.935288  848852 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-491115/.minikube/ca.pem
	I1005 20:36:57.935324  848852 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17363-491115/.minikube/ca.pem (1082 bytes)
	I1005 20:36:57.935419  848852 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17363-491115/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-330869 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-330869]
	I1005 20:36:58.024520  848852 provision.go:172] copyRemoteCerts
	I1005 20:36:58.024583  848852 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1005 20:36:58.024645  848852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330869
	I1005 20:36:58.041908  848852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/old-k8s-version-330869/id_rsa Username:docker}
	I1005 20:36:58.138407  848852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1005 20:36:58.163830  848852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1005 20:36:58.188988  848852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1005 20:36:58.214350  848852 provision.go:86] duration metric: configureAuth took 300.469026ms
	I1005 20:36:58.214380  848852 ubuntu.go:193] setting minikube options for container-runtime
	I1005 20:36:58.214555  848852 config.go:182] Loaded profile config "old-k8s-version-330869": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1005 20:36:58.214618  848852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330869
	I1005 20:36:58.233958  848852 main.go:141] libmachine: Using SSH client type: native
	I1005 20:36:58.234450  848852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 127.0.0.1 33383 <nil> <nil>}
	I1005 20:36:58.234478  848852 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1005 20:36:58.373924  848852 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1005 20:36:58.373955  848852 ubuntu.go:71] root file system type: overlay
	I1005 20:36:58.374081  848852 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1005 20:36:58.374161  848852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330869
	I1005 20:36:58.391540  848852 main.go:141] libmachine: Using SSH client type: native
	I1005 20:36:58.391915  848852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 127.0.0.1 33383 <nil> <nil>}
	I1005 20:36:58.392004  848852 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1005 20:36:58.542978  848852 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1005 20:36:58.543087  848852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330869
	I1005 20:36:58.561956  848852 main.go:141] libmachine: Using SSH client type: native
	I1005 20:36:58.562286  848852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 127.0.0.1 33383 <nil> <nil>}
	I1005 20:36:58.562306  848852 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1005 20:36:59.329516  848852 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-09-04 12:30:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-10-05 20:36:58.539594498 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1005 20:36:59.329548  848852 machine.go:91] provisioned docker machine in 4.767201085s
	I1005 20:36:59.329560  848852 client.go:171] LocalClient.Create took 11.210532402s
	I1005 20:36:59.329576  848852 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-330869" took 11.210602333s
	I1005 20:36:59.329584  848852 start.go:300] post-start starting for "old-k8s-version-330869" (driver="docker")
	I1005 20:36:59.329604  848852 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1005 20:36:59.329676  848852 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1005 20:36:59.329723  848852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330869
	I1005 20:36:59.349346  848852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/old-k8s-version-330869/id_rsa Username:docker}
	I1005 20:36:59.447055  848852 ssh_runner.go:195] Run: cat /etc/os-release
	I1005 20:36:59.450698  848852 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1005 20:36:59.450735  848852 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1005 20:36:59.450744  848852 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1005 20:36:59.450751  848852 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1005 20:36:59.450768  848852 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-491115/.minikube/addons for local assets ...
	I1005 20:36:59.450869  848852 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-491115/.minikube/files for local assets ...
	I1005 20:36:59.450940  848852 filesync.go:149] local asset: /home/jenkins/minikube-integration/17363-491115/.minikube/files/etc/ssl/certs/4979262.pem -> 4979262.pem in /etc/ssl/certs
	I1005 20:36:59.451025  848852 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1005 20:36:59.460458  848852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/files/etc/ssl/certs/4979262.pem --> /etc/ssl/certs/4979262.pem (1708 bytes)
	I1005 20:36:59.485789  848852 start.go:303] post-start completed in 156.188398ms
	I1005 20:36:59.486213  848852 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-330869
	I1005 20:36:59.505013  848852 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/config.json ...
	I1005 20:36:59.505378  848852 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1005 20:36:59.505437  848852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330869
	I1005 20:36:59.524048  848852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/old-k8s-version-330869/id_rsa Username:docker}
	I1005 20:36:59.618225  848852 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1005 20:36:59.622843  848852 start.go:128] duration metric: createHost completed in 11.506445418s
	I1005 20:36:59.622871  848852 start.go:83] releasing machines lock for "old-k8s-version-330869", held for 11.506615462s
	I1005 20:36:59.622945  848852 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-330869
	I1005 20:36:59.642431  848852 ssh_runner.go:195] Run: cat /version.json
	I1005 20:36:59.642495  848852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330869
	I1005 20:36:59.642432  848852 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1005 20:36:59.642605  848852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330869
	I1005 20:36:59.662581  848852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/old-k8s-version-330869/id_rsa Username:docker}
	I1005 20:36:59.662737  848852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/old-k8s-version-330869/id_rsa Username:docker}
	I1005 20:36:59.853461  848852 ssh_runner.go:195] Run: systemctl --version
	I1005 20:36:59.858207  848852 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1005 20:36:59.862990  848852 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1005 20:36:59.889483  848852 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1005 20:36:59.889588  848852 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1005 20:36:59.906500  848852 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1005 20:36:59.923250  848852 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1005 20:36:59.923285  848852 start.go:469] detecting cgroup driver to use...
	I1005 20:36:59.923323  848852 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1005 20:36:59.923474  848852 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1005 20:36:59.939990  848852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I1005 20:36:59.950912  848852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1005 20:36:59.961778  848852 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1005 20:36:59.961837  848852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1005 20:36:59.973102  848852 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1005 20:36:59.983067  848852 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1005 20:36:59.993850  848852 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1005 20:37:00.005096  848852 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1005 20:37:00.014781  848852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1005 20:37:00.025703  848852 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1005 20:37:00.034265  848852 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1005 20:37:00.044006  848852 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1005 20:37:00.133786  848852 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1005 20:37:00.239384  848852 start.go:469] detecting cgroup driver to use...
	I1005 20:37:00.239442  848852 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1005 20:37:00.239500  848852 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1005 20:37:00.252124  848852 cruntime.go:277] skipping containerd shutdown because we are bound to it
	I1005 20:37:00.252191  848852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1005 20:37:00.266062  848852 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1005 20:37:00.286402  848852 ssh_runner.go:195] Run: which cri-dockerd
	I1005 20:37:00.292344  848852 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1005 20:37:00.320044  848852 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1005 20:37:00.339050  848852 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1005 20:37:00.446290  848852 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1005 20:37:00.547903  848852 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I1005 20:37:00.548059  848852 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1005 20:37:00.567151  848852 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1005 20:37:00.649892  848852 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1005 20:37:00.910258  848852 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1005 20:37:00.936286  848852 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1005 20:37:00.969093  848852 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 24.0.6 ...
	I1005 20:37:00.969197  848852 cli_runner.go:164] Run: docker network inspect old-k8s-version-330869 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1005 20:37:00.987767  848852 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1005 20:37:00.991940  848852 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1005 20:37:01.003986  848852 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1005 20:37:01.004064  848852 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1005 20:37:01.024559  848852 docker.go:664] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1005 20:37:01.024582  848852 docker.go:670] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I1005 20:37:01.024625  848852 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1005 20:37:01.034897  848852 ssh_runner.go:195] Run: which lz4
	I1005 20:37:01.039150  848852 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1005 20:37:01.042853  848852 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1005 20:37:01.042881  848852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I1005 20:37:01.954367  848852 docker.go:628] Took 0.915253 seconds to copy over tarball
	I1005 20:37:01.954442  848852 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1005 20:37:04.201575  848852 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.247097174s)
	I1005 20:37:04.201615  848852 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1005 20:37:04.268945  848852 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1005 20:37:04.277399  848852 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I1005 20:37:04.295027  848852 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1005 20:37:04.374758  848852 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1005 20:37:07.041735  848852 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.666927518s)
	I1005 20:37:07.041868  848852 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1005 20:37:07.062423  848852 docker.go:664] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1005 20:37:07.062449  848852 docker.go:670] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I1005 20:37:07.062459  848852 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1005 20:37:07.063895  848852 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1005 20:37:07.067879  848852 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1005 20:37:07.067905  848852 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1005 20:37:07.067879  848852 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1005 20:37:07.067879  848852 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1005 20:37:07.067880  848852 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1005 20:37:07.067884  848852 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1005 20:37:07.067880  848852 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1005 20:37:07.068476  848852 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1005 20:37:07.068802  848852 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1005 20:37:07.068815  848852 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1005 20:37:07.068891  848852 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1005 20:37:07.068903  848852 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1005 20:37:07.068955  848852 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1005 20:37:07.068977  848852 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1005 20:37:07.068907  848852 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1005 20:37:07.235266  848852 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1005 20:37:07.240360  848852 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1005 20:37:07.249270  848852 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1005 20:37:07.257629  848852 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1005 20:37:07.257889  848852 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1005 20:37:07.257939  848852 docker.go:317] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1005 20:37:07.257981  848852 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1005 20:37:07.260526  848852 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1005 20:37:07.262168  848852 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1005 20:37:07.262219  848852 docker.go:317] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1005 20:37:07.262264  848852 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I1005 20:37:07.273697  848852 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1005 20:37:07.275330  848852 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1005 20:37:07.275394  848852 docker.go:317] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1005 20:37:07.275448  848852 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1005 20:37:07.277427  848852 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1005 20:37:07.319579  848852 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1005 20:37:07.319640  848852 docker.go:317] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1005 20:37:07.319701  848852 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I1005 20:37:07.319825  848852 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-491115/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1005 20:37:07.322936  848852 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1005 20:37:07.322996  848852 docker.go:317] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1005 20:37:07.323038  848852 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1005 20:37:07.323053  848852 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-491115/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1005 20:37:07.331388  848852 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1005 20:37:07.331467  848852 docker.go:317] Removing image: registry.k8s.io/pause:3.1
	I1005 20:37:07.331515  848852 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I1005 20:37:07.340975  848852 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-491115/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1005 20:37:07.346081  848852 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1005 20:37:07.346161  848852 docker.go:317] Removing image: registry.k8s.io/coredns:1.6.2
	I1005 20:37:07.346261  848852 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I1005 20:37:07.347802  848852 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-491115/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1005 20:37:07.353891  848852 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-491115/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1005 20:37:07.355556  848852 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-491115/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1005 20:37:07.367929  848852 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-491115/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1005 20:37:07.382091  848852 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1005 20:37:07.429006  848852 cache_images.go:92] LoadImages completed in 366.527099ms
	W1005 20:37:07.429128  848852 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17363-491115/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17363-491115/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	I1005 20:37:07.429252  848852 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1005 20:37:07.489573  848852 cni.go:84] Creating CNI manager for ""
	I1005 20:37:07.489598  848852 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1005 20:37:07.489615  848852 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1005 20:37:07.489636  848852 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-330869 NodeName:old-k8s-version-330869 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1005 20:37:07.489779  848852 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-330869"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-330869
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.85.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1005 20:37:07.489853  848852 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-330869 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-330869 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1005 20:37:07.489899  848852 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1005 20:37:07.498525  848852 binaries.go:44] Found k8s binaries, skipping transfer
	I1005 20:37:07.498593  848852 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1005 20:37:07.507319  848852 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I1005 20:37:07.526216  848852 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1005 20:37:07.545140  848852 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I1005 20:37:07.564649  848852 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1005 20:37:07.568218  848852 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1005 20:37:07.579493  848852 certs.go:56] Setting up /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869 for IP: 192.168.85.2
	I1005 20:37:07.579548  848852 certs.go:190] acquiring lock for shared ca certs: {Name:mka6627fa5c31076c5fa233a6bbda946476bff5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:37:07.579720  848852 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17363-491115/.minikube/ca.key
	I1005 20:37:07.579771  848852 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17363-491115/.minikube/proxy-client-ca.key
	I1005 20:37:07.579831  848852 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/client.key
	I1005 20:37:07.579853  848852 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/client.crt with IP's: []
	I1005 20:37:07.958797  848852 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/client.crt ...
	I1005 20:37:07.958830  848852 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/client.crt: {Name:mk4ed8648d0b7843797ac83f4b98a7e432949205 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:37:07.958989  848852 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/client.key ...
	I1005 20:37:07.959003  848852 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/client.key: {Name:mkdeeda61d9b948461727c2c9411c560d4602d79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:37:07.959098  848852 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/apiserver.key.43b9df8c
	I1005 20:37:07.959122  848852 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/apiserver.crt.43b9df8c with IP's: [192.168.85.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1005 20:37:08.028205  848852 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/apiserver.crt.43b9df8c ...
	I1005 20:37:08.028238  848852 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/apiserver.crt.43b9df8c: {Name:mk190cd886cb88264c237696eef655abb98bca69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:37:08.028438  848852 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/apiserver.key.43b9df8c ...
	I1005 20:37:08.028458  848852 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/apiserver.key.43b9df8c: {Name:mk0015e0633a7ac62eded2fa85365447422119b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:37:08.028553  848852 certs.go:337] copying /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/apiserver.crt.43b9df8c -> /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/apiserver.crt
	I1005 20:37:08.028647  848852 certs.go:341] copying /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/apiserver.key.43b9df8c -> /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/apiserver.key
	I1005 20:37:08.028722  848852 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/proxy-client.key
	I1005 20:37:08.028743  848852 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/proxy-client.crt with IP's: []
	I1005 20:37:08.294258  848852 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/proxy-client.crt ...
	I1005 20:37:08.294297  848852 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/proxy-client.crt: {Name:mkf2ca41570659a17afc4d42fd4914df945ce32f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:37:08.294486  848852 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/proxy-client.key ...
	I1005 20:37:08.294503  848852 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/proxy-client.key: {Name:mkc674e75c0a5d5cc7a649ee713bd5202b448a5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:37:08.294733  848852 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/home/jenkins/minikube-integration/17363-491115/.minikube/certs/497926.pem (1338 bytes)
	W1005 20:37:08.294776  848852 certs.go:433] ignoring /home/jenkins/minikube-integration/17363-491115/.minikube/certs/home/jenkins/minikube-integration/17363-491115/.minikube/certs/497926_empty.pem, impossibly tiny 0 bytes
	I1005 20:37:08.294788  848852 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca-key.pem (1671 bytes)
	I1005 20:37:08.294825  848852 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca.pem (1082 bytes)
	I1005 20:37:08.294854  848852 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/home/jenkins/minikube-integration/17363-491115/.minikube/certs/cert.pem (1123 bytes)
	I1005 20:37:08.294881  848852 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/home/jenkins/minikube-integration/17363-491115/.minikube/certs/key.pem (1679 bytes)
	I1005 20:37:08.294952  848852 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-491115/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17363-491115/.minikube/files/etc/ssl/certs/4979262.pem (1708 bytes)
	I1005 20:37:08.295568  848852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1005 20:37:08.330946  848852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1005 20:37:08.379904  848852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1005 20:37:08.408268  848852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/old-k8s-version-330869/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1005 20:37:08.433341  848852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1005 20:37:08.460942  848852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1005 20:37:08.488803  848852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1005 20:37:08.570596  848852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1005 20:37:08.596930  848852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/certs/497926.pem --> /usr/share/ca-certificates/497926.pem (1338 bytes)
	I1005 20:37:08.621626  848852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/files/etc/ssl/certs/4979262.pem --> /usr/share/ca-certificates/4979262.pem (1708 bytes)
	I1005 20:37:08.712387  848852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1005 20:37:08.738473  848852 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1005 20:37:08.757930  848852 ssh_runner.go:195] Run: openssl version
	I1005 20:37:08.764376  848852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1005 20:37:08.775051  848852 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1005 20:37:08.778692  848852 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  5 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1005 20:37:08.778758  848852 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1005 20:37:08.785976  848852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1005 20:37:08.796153  848852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/497926.pem && ln -fs /usr/share/ca-certificates/497926.pem /etc/ssl/certs/497926.pem"
	I1005 20:37:08.805780  848852 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/497926.pem
	I1005 20:37:08.809761  848852 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  5 20:07 /usr/share/ca-certificates/497926.pem
	I1005 20:37:08.809820  848852 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/497926.pem
	I1005 20:37:08.817484  848852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/497926.pem /etc/ssl/certs/51391683.0"
	I1005 20:37:08.827242  848852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4979262.pem && ln -fs /usr/share/ca-certificates/4979262.pem /etc/ssl/certs/4979262.pem"
	I1005 20:37:08.839332  848852 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4979262.pem
	I1005 20:37:08.843366  848852 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  5 20:07 /usr/share/ca-certificates/4979262.pem
	I1005 20:37:08.843417  848852 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4979262.pem
	I1005 20:37:08.851212  848852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4979262.pem /etc/ssl/certs/3ec20f2e.0"
	I1005 20:37:08.862753  848852 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1005 20:37:08.866859  848852 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1005 20:37:08.866926  848852 kubeadm.go:404] StartCluster: {Name:old-k8s-version-330869 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-330869 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 20:37:08.867049  848852 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1005 20:37:08.887607  848852 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1005 20:37:08.897145  848852 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1005 20:37:08.908022  848852 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1005 20:37:08.908088  848852 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1005 20:37:08.919431  848852 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1005 20:37:08.919493  848852 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1005 20:37:08.992659  848852 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1005 20:37:08.992761  848852 kubeadm.go:322] [preflight] Running pre-flight checks
	I1005 20:37:09.242033  848852 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1005 20:37:09.242121  848852 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1044-gcp
	I1005 20:37:09.242191  848852 kubeadm.go:322] DOCKER_VERSION: 24.0.6
	I1005 20:37:09.242244  848852 kubeadm.go:322] OS: Linux
	I1005 20:37:09.242309  848852 kubeadm.go:322] CGROUPS_CPU: enabled
	I1005 20:37:09.242377  848852 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1005 20:37:09.242445  848852 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1005 20:37:09.242515  848852 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1005 20:37:09.242594  848852 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1005 20:37:09.242659  848852 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1005 20:37:09.382245  848852 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1005 20:37:09.382418  848852 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1005 20:37:09.382547  848852 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1005 20:37:09.679782  848852 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1005 20:37:09.681184  848852 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1005 20:37:09.688454  848852 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1005 20:37:09.801004  848852 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1005 20:37:09.804219  848852 out.go:204]   - Generating certificates and keys ...
	I1005 20:37:09.804349  848852 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1005 20:37:09.804438  848852 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1005 20:37:10.263893  848852 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1005 20:37:10.663748  848852 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1005 20:37:11.011679  848852 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1005 20:37:11.134670  848852 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1005 20:37:11.266037  848852 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1005 20:37:11.266251  848852 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-330869 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1005 20:37:11.337048  848852 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1005 20:37:11.337384  848852 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-330869 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1005 20:37:11.474296  848852 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1005 20:37:11.546459  848852 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1005 20:37:11.687001  848852 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1005 20:37:11.687129  848852 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1005 20:37:11.798486  848852 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1005 20:37:11.870159  848852 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1005 20:37:11.982702  848852 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1005 20:37:12.166063  848852 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1005 20:37:12.167376  848852 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1005 20:37:12.170016  848852 out.go:204]   - Booting up control plane ...
	I1005 20:37:12.170180  848852 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1005 20:37:12.176340  848852 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1005 20:37:12.217622  848852 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1005 20:37:12.219576  848852 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1005 20:37:12.223059  848852 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1005 20:37:22.225884  848852 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.002739 seconds
	I1005 20:37:22.226037  848852 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1005 20:37:22.239720  848852 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1005 20:37:22.759159  848852 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1005 20:37:22.759416  848852 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-330869 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1005 20:37:23.266801  848852 kubeadm.go:322] [bootstrap-token] Using token: tirqp6.puzpp2xudnf7zigi
	I1005 20:37:23.268588  848852 out.go:204]   - Configuring RBAC rules ...
	I1005 20:37:23.268746  848852 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1005 20:37:23.272113  848852 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1005 20:37:23.276172  848852 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1005 20:37:23.278638  848852 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1005 20:37:23.281472  848852 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1005 20:37:23.337015  848852 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1005 20:37:23.681202  848852 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1005 20:37:23.682945  848852 kubeadm.go:322] 
	I1005 20:37:23.683077  848852 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1005 20:37:23.683110  848852 kubeadm.go:322] 
	I1005 20:37:23.683217  848852 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1005 20:37:23.683230  848852 kubeadm.go:322] 
	I1005 20:37:23.683261  848852 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1005 20:37:23.683343  848852 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1005 20:37:23.683414  848852 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1005 20:37:23.683424  848852 kubeadm.go:322] 
	I1005 20:37:23.683489  848852 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1005 20:37:23.683587  848852 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1005 20:37:23.683685  848852 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1005 20:37:23.683697  848852 kubeadm.go:322] 
	I1005 20:37:23.683810  848852 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1005 20:37:23.683907  848852 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1005 20:37:23.683918  848852 kubeadm.go:322] 
	I1005 20:37:23.684023  848852 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token tirqp6.puzpp2xudnf7zigi \
	I1005 20:37:23.684158  848852 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:1a3efe62433952af74d7dd241658b1c6e6ef634460498e5c06f52126617f7626 \
	I1005 20:37:23.684198  848852 kubeadm.go:322]     --control-plane 	  
	I1005 20:37:23.684207  848852 kubeadm.go:322] 
	I1005 20:37:23.684311  848852 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1005 20:37:23.684322  848852 kubeadm.go:322] 
	I1005 20:37:23.684421  848852 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token tirqp6.puzpp2xudnf7zigi \
	I1005 20:37:23.684557  848852 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:1a3efe62433952af74d7dd241658b1c6e6ef634460498e5c06f52126617f7626 
	I1005 20:37:23.687172  848852 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1005 20:37:23.687372  848852 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 18.09
	I1005 20:37:23.687626  848852 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1044-gcp\n", err: exit status 1
	I1005 20:37:23.687719  848852 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1005 20:37:23.687775  848852 cni.go:84] Creating CNI manager for ""
	I1005 20:37:23.687807  848852 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1005 20:37:23.687850  848852 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1005 20:37:23.687919  848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:37:23.687921  848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=300d55cee86053f5b4c7a654fc8e7b9d3c030d53 minikube.k8s.io/name=old-k8s-version-330869 minikube.k8s.io/updated_at=2023_10_05T20_37_23_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:37:24.028458  848852 ops.go:34] apiserver oom_adj: -16
	I1005 20:37:24.028564  848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:37:24.123626  848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:37:24.700769  848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:37:25.200541  848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:37:25.700504  848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:37:26.200936  848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:37:26.700944  848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:37:27.200381  848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:37:27.700490  848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:37:28.200469  848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:37:28.700957  848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:37:29.200664  848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:37:29.700412  848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:37:30.201008  848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:37:30.700269  848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:37:31.201263  848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:37:31.701305  848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:37:32.200694  848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:37:32.700372  848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:37:33.201358  848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:37:33.700533  848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:37:34.201302  848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:37:34.700298  848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:37:35.201106  848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:37:35.701378  848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:37:36.200933  848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:37:36.700418  848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:37:37.200435  848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:37:37.700370  848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:37:38.200992  848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:37:38.700590  848852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:37:38.851796  848852 kubeadm.go:1081] duration metric: took 15.163932557s to wait for elevateKubeSystemPrivileges.
	I1005 20:37:38.851833  848852 kubeadm.go:406] StartCluster complete in 29.984923035s
	I1005 20:37:38.851852  848852 settings.go:142] acquiring lock: {Name:mk74c5e95d8c9fcaf06097e6d304129504752ec7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:37:38.851923  848852 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17363-491115/kubeconfig
	I1005 20:37:38.853026  848852 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-491115/kubeconfig: {Name:mkd6618cb8d42fbccf8ec108c3891f3e690ff249 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:37:38.900545  848852 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1005 20:37:38.900672  848852 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1005 20:37:38.900748  848852 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-330869"
	I1005 20:37:38.900763  848852 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-330869"
	I1005 20:37:38.900800  848852 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-330869"
	I1005 20:37:38.900831  848852 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-330869"
	I1005 20:37:38.900871  848852 host.go:66] Checking if "old-k8s-version-330869" exists ...
	I1005 20:37:38.900834  848852 config.go:182] Loaded profile config "old-k8s-version-330869": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1005 20:37:38.901374  848852 cli_runner.go:164] Run: docker container inspect old-k8s-version-330869 --format={{.State.Status}}
	I1005 20:37:38.901811  848852 cli_runner.go:164] Run: docker container inspect old-k8s-version-330869 --format={{.State.Status}}
	I1005 20:37:38.928479  848852 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-330869"
	I1005 20:37:38.978125  848852 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1005 20:37:38.994806  848852 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1005 20:37:38.994831  848852 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1005 20:37:38.994905  848852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330869
	I1005 20:37:38.978173  848852 host.go:66] Checking if "old-k8s-version-330869" exists ...
	I1005 20:37:38.940356  848852 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-330869" context rescaled to 1 replicas
	I1005 20:37:38.995184  848852 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1005 20:37:38.997785  848852 out.go:177] * Verifying Kubernetes components...
	I1005 20:37:38.995670  848852 cli_runner.go:164] Run: docker container inspect old-k8s-version-330869 --format={{.State.Status}}
	I1005 20:37:39.002134  848852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 20:37:39.033883  848852 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1005 20:37:39.035485  848852 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-330869" to be "Ready" ...
	I1005 20:37:39.038127  848852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/old-k8s-version-330869/id_rsa Username:docker}
	I1005 20:37:39.042155  848852 node_ready.go:49] node "old-k8s-version-330869" has status "Ready":"True"
	I1005 20:37:39.042182  848852 node_ready.go:38] duration metric: took 6.650275ms waiting for node "old-k8s-version-330869" to be "Ready" ...
	I1005 20:37:39.042195  848852 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1005 20:37:39.050325  848852 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace to be "Ready" ...
	I1005 20:37:39.063872  848852 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1005 20:37:39.063896  848852 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1005 20:37:39.063957  848852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330869
	I1005 20:37:39.093129  848852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/old-k8s-version-330869/id_rsa Username:docker}
	I1005 20:37:39.249557  848852 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1005 20:37:39.355969  848852 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1005 20:37:39.825436  848852 start.go:923] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1005 20:37:40.330580  848852 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1005 20:37:40.332220  848852 addons.go:502] enable addons completed in 1.431540104s: enabled=[storage-provisioner default-storageclass]
	I1005 20:37:41.094031  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:37:43.593508  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:37:45.594037  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:37:48.093420  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:37:50.093978  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:37:52.216164  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:37:54.592609  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:37:56.592855  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:37:58.593029  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:38:00.593273  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:38:02.593404  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:38:04.593563  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:38:07.094365  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:38:09.593999  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:38:12.093492  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:38:14.093641  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:38:16.094034  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:38:18.592980  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:38:21.094297  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:38:23.095657  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:38:25.593120  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:38:27.593680  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:38:30.093875  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:38:32.593912  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:38:35.094614  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:38:37.592828  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:38:39.593194  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:38:41.593355  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:38:44.093479  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:38:46.093787  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:38:48.592664  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:38:50.593150  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:38:52.593180  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:38:55.093045  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:38:57.592904  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:38:59.594150  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:39:02.094172  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:39:04.593193  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:39:06.593519  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:39:09.093086  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:39:11.093745  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:39:13.593125  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:39:16.092776  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:39:18.093398  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:39:20.093745  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:39:22.096037  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:39:24.593732  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:39:26.594148  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:39:29.092714  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:39:31.093472  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:39:33.593280  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:39:35.593425  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:39:37.593563  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:39:39.593680  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:39:42.093105  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:39:44.592896  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:39:46.593130  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:39:48.593726  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:39:51.093871  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:39:53.595038  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:39:56.092671  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:39:58.093532  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:40:00.093699  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:40:02.593675  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:40:05.092805  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:40:07.093455  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:40:09.093550  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:40:11.592886  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:40:13.593370  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:40:15.593878  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:40:17.594124  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:40:20.093466  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:40:22.592921  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:40:25.093952  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:40:27.593123  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:40:29.593520  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:40:31.593667  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:40:34.093053  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:40:36.592982  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:40:38.593683  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:40:41.093063  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:40:43.093386  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:40:45.093786  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:40:47.593705  848852 pod_ready.go:102] pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace has status "Ready":"False"
	I1005 20:40:50.093878  848852 pod_ready.go:97] node "old-k8s-version-330869" hosting pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-330869" has status "Ready":"False"
	I1005 20:40:50.093914  848852 pod_ready.go:81] duration metric: took 3m11.043547943s waiting for pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace to be "Ready" ...
	E1005 20:40:50.093927  848852 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-330869" hosting pod "coredns-5644d7b6d9-k2f47" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-330869" has status "Ready":"False"
	I1005 20:40:50.093939  848852 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-wmjhd" in "kube-system" namespace to be "Ready" ...
	I1005 20:40:50.095804  848852 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-wmjhd" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-wmjhd" not found
	I1005 20:40:50.095829  848852 pod_ready.go:81] duration metric: took 1.881885ms waiting for pod "coredns-5644d7b6d9-wmjhd" in "kube-system" namespace to be "Ready" ...
	E1005 20:40:50.095838  848852 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-wmjhd" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-wmjhd" not found
	I1005 20:40:50.095844  848852 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-n9cwb" in "kube-system" namespace to be "Ready" ...
	I1005 20:40:50.099963  848852 pod_ready.go:97] node "old-k8s-version-330869" hosting pod "kube-proxy-n9cwb" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-330869" has status "Ready":"False"
	I1005 20:40:50.099987  848852 pod_ready.go:81] duration metric: took 4.137428ms waiting for pod "kube-proxy-n9cwb" in "kube-system" namespace to be "Ready" ...
	E1005 20:40:50.099995  848852 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-330869" hosting pod "kube-proxy-n9cwb" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-330869" has status "Ready":"False"
	I1005 20:40:50.100000  848852 pod_ready.go:38] duration metric: took 3m11.057792461s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1005 20:40:50.100021  848852 api_server.go:52] waiting for apiserver process to appear ...
	I1005 20:40:50.100077  848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1005 20:40:50.119441  848852 logs.go:284] 1 containers: [91420fd2d357]
	I1005 20:40:50.119509  848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1005 20:40:50.139691  848852 logs.go:284] 1 containers: [530e42b9f6c7]
	I1005 20:40:50.139783  848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1005 20:40:50.159543  848852 logs.go:284] 1 containers: [9f0be3358486]
	I1005 20:40:50.159617  848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1005 20:40:50.178626  848852 logs.go:284] 1 containers: [a576da8318f8]
	I1005 20:40:50.178709  848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1005 20:40:50.199418  848852 logs.go:284] 1 containers: [cef84f5b51c4]
	I1005 20:40:50.199509  848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1005 20:40:50.218722  848852 logs.go:284] 1 containers: [6c66019a6e01]
	I1005 20:40:50.218816  848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1005 20:40:50.238070  848852 logs.go:284] 0 containers: []
	W1005 20:40:50.238096  848852 logs.go:286] No container was found matching "kindnet"
	I1005 20:40:50.238112  848852 logs.go:123] Gathering logs for kube-scheduler [a576da8318f8] ...
	I1005 20:40:50.238131  848852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a576da8318f8"
	I1005 20:40:50.269507  848852 logs.go:123] Gathering logs for kube-proxy [cef84f5b51c4] ...
	I1005 20:40:50.269545  848852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef84f5b51c4"
	I1005 20:40:50.291677  848852 logs.go:123] Gathering logs for kube-controller-manager [6c66019a6e01] ...
	I1005 20:40:50.291716  848852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66019a6e01"
	I1005 20:40:50.326025  848852 logs.go:123] Gathering logs for kubelet ...
	I1005 20:40:50.326065  848852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1005 20:40:50.379350  848852 logs.go:123] Gathering logs for dmesg ...
	I1005 20:40:50.379395  848852 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1005 20:40:50.406532  848852 logs.go:123] Gathering logs for kube-apiserver [91420fd2d357] ...
	I1005 20:40:50.406577  848852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91420fd2d357"
	I1005 20:40:50.437288  848852 logs.go:123] Gathering logs for Docker ...
	I1005 20:40:50.437326  848852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1005 20:40:50.455844  848852 logs.go:123] Gathering logs for container status ...
	I1005 20:40:50.455880  848852 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1005 20:40:50.498500  848852 logs.go:123] Gathering logs for describe nodes ...
	I1005 20:40:50.498530  848852 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1005 20:40:50.595652  848852 logs.go:123] Gathering logs for etcd [530e42b9f6c7] ...
	I1005 20:40:50.595696  848852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 530e42b9f6c7"
	I1005 20:40:50.620275  848852 logs.go:123] Gathering logs for coredns [9f0be3358486] ...
	I1005 20:40:50.620312  848852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0be3358486"
	I1005 20:40:53.146871  848852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1005 20:40:53.159851  848852 api_server.go:72] duration metric: took 3m14.164619226s to wait for apiserver process to appear ...
	I1005 20:40:53.159878  848852 api_server.go:88] waiting for apiserver healthz status ...
	I1005 20:40:53.159968  848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1005 20:40:53.180019  848852 logs.go:284] 1 containers: [91420fd2d357]
	I1005 20:40:53.180088  848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1005 20:40:53.199435  848852 logs.go:284] 1 containers: [530e42b9f6c7]
	I1005 20:40:53.199525  848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1005 20:40:53.218828  848852 logs.go:284] 1 containers: [9f0be3358486]
	I1005 20:40:53.218918  848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1005 20:40:53.238489  848852 logs.go:284] 1 containers: [a576da8318f8]
	I1005 20:40:53.238558  848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1005 20:40:53.258800  848852 logs.go:284] 1 containers: [cef84f5b51c4]
	I1005 20:40:53.258880  848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1005 20:40:53.278841  848852 logs.go:284] 1 containers: [6c66019a6e01]
	I1005 20:40:53.278928  848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1005 20:40:53.298156  848852 logs.go:284] 0 containers: []
	W1005 20:40:53.298184  848852 logs.go:286] No container was found matching "kindnet"
	I1005 20:40:53.298203  848852 logs.go:123] Gathering logs for kube-apiserver [91420fd2d357] ...
	I1005 20:40:53.298228  848852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91420fd2d357"
	I1005 20:40:53.328555  848852 logs.go:123] Gathering logs for kube-scheduler [a576da8318f8] ...
	I1005 20:40:53.328591  848852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a576da8318f8"
	I1005 20:40:53.356192  848852 logs.go:123] Gathering logs for kube-controller-manager [6c66019a6e01] ...
	I1005 20:40:53.356236  848852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66019a6e01"
	I1005 20:40:53.408101  848852 logs.go:123] Gathering logs for describe nodes ...
	I1005 20:40:53.408142  848852 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1005 20:40:53.504189  848852 logs.go:123] Gathering logs for dmesg ...
	I1005 20:40:53.504223  848852 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1005 20:40:53.530244  848852 logs.go:123] Gathering logs for etcd [530e42b9f6c7] ...
	I1005 20:40:53.530279  848852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 530e42b9f6c7"
	I1005 20:40:53.554335  848852 logs.go:123] Gathering logs for coredns [9f0be3358486] ...
	I1005 20:40:53.554369  848852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0be3358486"
	I1005 20:40:53.575206  848852 logs.go:123] Gathering logs for kube-proxy [cef84f5b51c4] ...
	I1005 20:40:53.575234  848852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef84f5b51c4"
	I1005 20:40:53.597030  848852 logs.go:123] Gathering logs for Docker ...
	I1005 20:40:53.597062  848852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1005 20:40:53.614535  848852 logs.go:123] Gathering logs for container status ...
	I1005 20:40:53.614572  848852 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1005 20:40:53.654139  848852 logs.go:123] Gathering logs for kubelet ...
	I1005 20:40:53.654172  848852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1005 20:40:56.213313  848852 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1005 20:40:56.218410  848852 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1005 20:40:56.219267  848852 api_server.go:141] control plane version: v1.16.0
	I1005 20:40:56.219291  848852 api_server.go:131] duration metric: took 3.059406313s to wait for apiserver health ...
	I1005 20:40:56.219299  848852 system_pods.go:43] waiting for kube-system pods to appear ...
	I1005 20:40:56.219366  848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1005 20:40:56.238449  848852 logs.go:284] 1 containers: [91420fd2d357]
	I1005 20:40:56.238527  848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1005 20:40:56.258629  848852 logs.go:284] 1 containers: [530e42b9f6c7]
	I1005 20:40:56.258720  848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1005 20:40:56.278969  848852 logs.go:284] 1 containers: [9f0be3358486]
	I1005 20:40:56.279060  848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1005 20:40:56.299094  848852 logs.go:284] 1 containers: [a576da8318f8]
	I1005 20:40:56.299162  848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1005 20:40:56.318944  848852 logs.go:284] 1 containers: [cef84f5b51c4]
	I1005 20:40:56.319016  848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1005 20:40:56.338209  848852 logs.go:284] 1 containers: [6c66019a6e01]
	I1005 20:40:56.338283  848852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1005 20:40:56.357954  848852 logs.go:284] 0 containers: []
	W1005 20:40:56.357976  848852 logs.go:286] No container was found matching "kindnet"
	I1005 20:40:56.358002  848852 logs.go:123] Gathering logs for dmesg ...
	I1005 20:40:56.358017  848852 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1005 20:40:56.385551  848852 logs.go:123] Gathering logs for coredns [9f0be3358486] ...
	I1005 20:40:56.385589  848852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0be3358486"
	I1005 20:40:56.406776  848852 logs.go:123] Gathering logs for kube-scheduler [a576da8318f8] ...
	I1005 20:40:56.406809  848852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a576da8318f8"
	I1005 20:40:56.432902  848852 logs.go:123] Gathering logs for kube-proxy [cef84f5b51c4] ...
	I1005 20:40:56.432934  848852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cef84f5b51c4"
	I1005 20:40:56.454562  848852 logs.go:123] Gathering logs for Docker ...
	I1005 20:40:56.454590  848852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1005 20:40:56.472931  848852 logs.go:123] Gathering logs for kubelet ...
	I1005 20:40:56.472968  848852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1005 20:40:56.526518  848852 logs.go:123] Gathering logs for describe nodes ...
	I1005 20:40:56.526560  848852 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1005 20:40:56.622248  848852 logs.go:123] Gathering logs for kube-apiserver [91420fd2d357] ...
	I1005 20:40:56.622279  848852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91420fd2d357"
	I1005 20:40:56.655051  848852 logs.go:123] Gathering logs for etcd [530e42b9f6c7] ...
	I1005 20:40:56.655087  848852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 530e42b9f6c7"
	I1005 20:40:56.678429  848852 logs.go:123] Gathering logs for kube-controller-manager [6c66019a6e01] ...
	I1005 20:40:56.678461  848852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66019a6e01"
	I1005 20:40:56.713931  848852 logs.go:123] Gathering logs for container status ...
	I1005 20:40:56.713968  848852 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1005 20:40:59.261292  848852 system_pods.go:59] 7 kube-system pods found
	I1005 20:40:59.261354  848852 system_pods.go:61] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:40:59.261362  848852 system_pods.go:61] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
	I1005 20:40:59.261367  848852 system_pods.go:61] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
	I1005 20:40:59.261373  848852 system_pods.go:61] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
	I1005 20:40:59.261378  848852 system_pods.go:61] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:40:59.261383  848852 system_pods.go:61] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
	I1005 20:40:59.261391  848852 system_pods.go:61] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:40:59.261404  848852 system_pods.go:74] duration metric: took 3.042098794s to wait for pod list to return data ...
	I1005 20:40:59.261414  848852 default_sa.go:34] waiting for default service account to be created ...
	I1005 20:40:59.263601  848852 default_sa.go:45] found service account: "default"
	I1005 20:40:59.263627  848852 default_sa.go:55] duration metric: took 2.205092ms for default service account to be created ...
	I1005 20:40:59.263637  848852 system_pods.go:116] waiting for k8s-apps to be running ...
	I1005 20:40:59.266968  848852 system_pods.go:86] 7 kube-system pods found
	I1005 20:40:59.266993  848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:40:59.267003  848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
	I1005 20:40:59.267008  848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
	I1005 20:40:59.267013  848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
	I1005 20:40:59.267018  848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:40:59.267023  848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
	I1005 20:40:59.267032  848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:40:59.267056  848852 retry.go:31] will retry after 300.792529ms: missing components: kube-dns, kube-proxy
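The retry.go lines in this stretch back off with growing, jittered delays between pod listings. A minimal sketch of that pattern using apimachinery's wait.Backoff is shown here; the parameter values are illustrative guesses, not the ones minikube actually uses, and componentsRunning is a hypothetical stand-in for the "are kube-dns and kube-proxy Running?" check.

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// componentsRunning stands in for the missing-components check in the log;
// it always returns false here so every backoff step is exercised.
func componentsRunning() bool { return false }

func main() {
	backoff := wait.Backoff{
		Duration: 300 * time.Millisecond, // first delay, similar to the ~300ms seen above
		Factor:   1.5,                    // grow the delay each retry
		Jitter:   0.4,                    // randomize, which is why the logged delays are uneven
		Steps:    20,                     // give up after this many attempts
	}

	attempt := 0
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempt++
		ok := componentsRunning()
		fmt.Printf("attempt %d: components running = %v\n", attempt, ok)
		return ok, nil // false means "retry after the next backoff delay"
	})
	if err != nil {
		fmt.Println("gave up:", err) // wait.ErrWaitTimeout once Steps are exhausted
	}
}
```

In the failing run the condition never becomes true, so the backoff simply stretches out until the surrounding test timeout fires.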
	I1005 20:40:59.572465  848852 system_pods.go:86] 7 kube-system pods found
	I1005 20:40:59.572493  848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:40:59.572500  848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
	I1005 20:40:59.572505  848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
	I1005 20:40:59.572510  848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
	I1005 20:40:59.572515  848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:40:59.572520  848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
	I1005 20:40:59.572527  848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:40:59.572542  848852 retry.go:31] will retry after 328.691351ms: missing components: kube-dns, kube-proxy
	I1005 20:40:59.906606  848852 system_pods.go:86] 7 kube-system pods found
	I1005 20:40:59.906646  848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:40:59.906656  848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
	I1005 20:40:59.906663  848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
	I1005 20:40:59.906671  848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
	I1005 20:40:59.906678  848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:40:59.906688  848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
	I1005 20:40:59.906699  848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:40:59.906725  848852 retry.go:31] will retry after 343.915985ms: missing components: kube-dns, kube-proxy
	I1005 20:41:00.254958  848852 system_pods.go:86] 7 kube-system pods found
	I1005 20:41:00.254992  848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:41:00.255001  848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
	I1005 20:41:00.255008  848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
	I1005 20:41:00.255017  848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
	I1005 20:41:00.255025  848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:41:00.255033  848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
	I1005 20:41:00.255043  848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:41:00.255068  848852 retry.go:31] will retry after 518.63445ms: missing components: kube-dns, kube-proxy
	I1005 20:41:00.778717  848852 system_pods.go:86] 7 kube-system pods found
	I1005 20:41:00.778748  848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:41:00.778756  848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
	I1005 20:41:00.778761  848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
	I1005 20:41:00.778767  848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
	I1005 20:41:00.778773  848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:41:00.778778  848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
	I1005 20:41:00.778784  848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:41:00.778800  848852 retry.go:31] will retry after 562.821701ms: missing components: kube-dns, kube-proxy
	I1005 20:41:01.346346  848852 system_pods.go:86] 7 kube-system pods found
	I1005 20:41:01.346375  848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:41:01.346382  848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
	I1005 20:41:01.346387  848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
	I1005 20:41:01.346393  848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
	I1005 20:41:01.346398  848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:41:01.346405  848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
	I1005 20:41:01.346411  848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:41:01.346428  848852 retry.go:31] will retry after 650.216203ms: missing components: kube-dns, kube-proxy
	I1005 20:41:02.002459  848852 system_pods.go:86] 7 kube-system pods found
	I1005 20:41:02.002570  848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:41:02.002590  848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
	I1005 20:41:02.002608  848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
	I1005 20:41:02.002624  848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
	I1005 20:41:02.002648  848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:41:02.002664  848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
	I1005 20:41:02.002683  848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:41:02.002711  848852 retry.go:31] will retry after 760.00556ms: missing components: kube-dns, kube-proxy
	I1005 20:41:02.766915  848852 system_pods.go:86] 7 kube-system pods found
	I1005 20:41:02.766945  848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:41:02.766953  848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
	I1005 20:41:02.766958  848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
	I1005 20:41:02.766963  848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
	I1005 20:41:02.766969  848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:41:02.766974  848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
	I1005 20:41:02.766981  848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:41:02.766998  848852 retry.go:31] will retry after 1.096256845s: missing components: kube-dns, kube-proxy
	I1005 20:41:03.868393  848852 system_pods.go:86] 7 kube-system pods found
	I1005 20:41:03.868432  848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:41:03.868441  848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
	I1005 20:41:03.868448  848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
	I1005 20:41:03.868456  848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
	I1005 20:41:03.868463  848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:41:03.868472  848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
	I1005 20:41:03.868483  848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:41:03.868508  848852 retry.go:31] will retry after 1.275861458s: missing components: kube-dns, kube-proxy
	I1005 20:41:05.148320  848852 system_pods.go:86] 7 kube-system pods found
	I1005 20:41:05.148350  848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:41:05.148357  848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
	I1005 20:41:05.148362  848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
	I1005 20:41:05.148367  848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
	I1005 20:41:05.148373  848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:41:05.148379  848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
	I1005 20:41:05.148385  848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:41:05.148402  848852 retry.go:31] will retry after 1.401487372s: missing components: kube-dns, kube-proxy
	I1005 20:41:06.554819  848852 system_pods.go:86] 7 kube-system pods found
	I1005 20:41:06.554857  848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:41:06.554867  848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
	I1005 20:41:06.554878  848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
	I1005 20:41:06.554886  848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
	I1005 20:41:06.554894  848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:41:06.554903  848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
	I1005 20:41:06.554909  848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:41:06.554929  848852 retry.go:31] will retry after 1.850633234s: missing components: kube-dns, kube-proxy
	I1005 20:41:08.410662  848852 system_pods.go:86] 7 kube-system pods found
	I1005 20:41:08.410692  848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:41:08.410699  848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
	I1005 20:41:08.410704  848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
	I1005 20:41:08.410709  848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
	I1005 20:41:08.410715  848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:41:08.410720  848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
	I1005 20:41:08.410726  848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:41:08.410742  848852 retry.go:31] will retry after 3.472865824s: missing components: kube-dns, kube-proxy
	I1005 20:41:11.889408  848852 system_pods.go:86] 7 kube-system pods found
	I1005 20:41:11.889447  848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:41:11.889455  848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
	I1005 20:41:11.889460  848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
	I1005 20:41:11.889465  848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
	I1005 20:41:11.889471  848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:41:11.889476  848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
	I1005 20:41:11.889483  848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:41:11.889500  848852 retry.go:31] will retry after 3.085936718s: missing components: kube-dns, kube-proxy
	I1005 20:41:14.981245  848852 system_pods.go:86] 7 kube-system pods found
	I1005 20:41:14.981284  848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:41:14.981295  848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
	I1005 20:41:14.981304  848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
	I1005 20:41:14.981313  848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
	I1005 20:41:14.981325  848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:41:14.981336  848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
	I1005 20:41:14.981347  848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:41:14.981366  848852 retry.go:31] will retry after 4.272914778s: missing components: kube-dns, kube-proxy
	I1005 20:41:19.259463  848852 system_pods.go:86] 7 kube-system pods found
	I1005 20:41:19.259496  848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:41:19.259503  848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
	I1005 20:41:19.259509  848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
	I1005 20:41:19.259513  848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
	I1005 20:41:19.259519  848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:41:19.259524  848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
	I1005 20:41:19.259530  848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:41:19.259549  848852 retry.go:31] will retry after 5.262882276s: missing components: kube-dns, kube-proxy
	I1005 20:41:24.526746  848852 system_pods.go:86] 7 kube-system pods found
	I1005 20:41:24.526779  848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:41:24.526786  848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
	I1005 20:41:24.526792  848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
	I1005 20:41:24.526796  848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
	I1005 20:41:24.526801  848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:41:24.526806  848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
	I1005 20:41:24.526812  848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:41:24.526831  848852 retry.go:31] will retry after 6.668638073s: missing components: kube-dns, kube-proxy
	I1005 20:41:31.201287  848852 system_pods.go:86] 7 kube-system pods found
	I1005 20:41:31.201327  848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:41:31.201337  848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
	I1005 20:41:31.201346  848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
	I1005 20:41:31.201353  848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
	I1005 20:41:31.201360  848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:41:31.201369  848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
	I1005 20:41:31.201383  848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:41:31.201407  848852 retry.go:31] will retry after 9.396673494s: missing components: kube-dns, kube-proxy
	I1005 20:41:40.603038  848852 system_pods.go:86] 7 kube-system pods found
	I1005 20:41:40.603071  848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:41:40.603078  848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
	I1005 20:41:40.603085  848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
	I1005 20:41:40.603090  848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
	I1005 20:41:40.603096  848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:41:40.603101  848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
	I1005 20:41:40.603137  848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:41:40.603156  848852 retry.go:31] will retry after 13.83982148s: missing components: kube-dns, kube-proxy
	I1005 20:41:54.447269  848852 system_pods.go:86] 7 kube-system pods found
	I1005 20:41:54.447300  848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:41:54.447307  848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
	I1005 20:41:54.447315  848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
	I1005 20:41:54.447320  848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
	I1005 20:41:54.447325  848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:41:54.447330  848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
	I1005 20:41:54.447336  848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:41:54.447351  848852 retry.go:31] will retry after 16.909017562s: missing components: kube-dns, kube-proxy
	I1005 20:42:11.362760  848852 system_pods.go:86] 7 kube-system pods found
	I1005 20:42:11.362798  848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:42:11.362808  848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
	I1005 20:42:11.362816  848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
	I1005 20:42:11.362824  848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
	I1005 20:42:11.362833  848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:42:11.362844  848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
	I1005 20:42:11.362857  848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:42:11.362886  848852 retry.go:31] will retry after 13.151324006s: missing components: kube-dns, kube-proxy
	I1005 20:42:24.519701  848852 system_pods.go:86] 7 kube-system pods found
	I1005 20:42:24.519745  848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:42:24.519756  848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
	I1005 20:42:24.519764  848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
	I1005 20:42:24.519771  848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
	I1005 20:42:24.519777  848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:42:24.519784  848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
	I1005 20:42:24.519800  848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:42:24.519823  848852 retry.go:31] will retry after 19.438415105s: missing components: kube-dns, kube-proxy
	I1005 20:42:43.963102  848852 system_pods.go:86] 7 kube-system pods found
	I1005 20:42:43.963137  848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:42:43.963145  848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
	I1005 20:42:43.963150  848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
	I1005 20:42:43.963154  848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
	I1005 20:42:43.963160  848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:42:43.963165  848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
	I1005 20:42:43.963171  848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:42:43.963192  848852 retry.go:31] will retry after 27.185744025s: missing components: kube-dns, kube-proxy
	I1005 20:43:11.154216  848852 system_pods.go:86] 7 kube-system pods found
	I1005 20:43:11.154250  848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:43:11.154258  848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
	I1005 20:43:11.154263  848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
	I1005 20:43:11.154269  848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
	I1005 20:43:11.154274  848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:43:11.154281  848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
	I1005 20:43:11.154287  848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:43:11.154303  848852 retry.go:31] will retry after 30.621447152s: missing components: kube-dns, kube-proxy
	I1005 20:43:41.781018  848852 system_pods.go:86] 7 kube-system pods found
	I1005 20:43:41.781059  848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:43:41.781067  848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
	I1005 20:43:41.781072  848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
	I1005 20:43:41.781078  848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
	I1005 20:43:41.781085  848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:43:41.781091  848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
	I1005 20:43:41.781101  848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:43:41.781125  848852 retry.go:31] will retry after 48.291810362s: missing components: kube-dns, kube-proxy
	I1005 20:44:30.078532  848852 system_pods.go:86] 7 kube-system pods found
	I1005 20:44:30.078565  848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:44:30.078577  848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
	I1005 20:44:30.078585  848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
	I1005 20:44:30.078593  848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
	I1005 20:44:30.078602  848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:44:30.078609  848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
	I1005 20:44:30.078619  848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:44:30.078642  848852 retry.go:31] will retry after 45.333697219s: missing components: kube-dns, kube-proxy
	I1005 20:45:15.417486  848852 system_pods.go:86] 7 kube-system pods found
	I1005 20:45:15.417531  848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:45:15.417542  848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
	I1005 20:45:15.417550  848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
	I1005 20:45:15.417558  848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
	I1005 20:45:15.417565  848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:45:15.417579  848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
	I1005 20:45:15.417589  848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:45:15.417621  848852 retry.go:31] will retry after 1m12.232820849s: missing components: kube-dns, kube-proxy
	I1005 20:46:27.657681  848852 system_pods.go:86] 7 kube-system pods found
	I1005 20:46:27.657726  848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:46:27.657736  848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
	I1005 20:46:27.657741  848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
	I1005 20:46:27.657747  848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
	I1005 20:46:27.657753  848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:46:27.657758  848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
	I1005 20:46:27.657766  848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:46:27.660128  848852 out.go:177] 
	W1005 20:46:27.662010  848852 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns, kube-proxy
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns, kube-proxy
	W1005 20:46:27.662024  848852 out.go:239] * 
	* 
	W1005 20:46:27.662801  848852 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1005 20:46:27.665143  848852 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-330869 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0": exit status 80
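The stderr above is dominated by minikube's apps_running wait loop: it lists the kube-system pods, finds kube-dns and kube-proxy still Pending, and retries after randomized, roughly increasing delays (from ~14s up to ~72s) until the 6m0s node-wait budget is exhausted, at which point the start exits with GUEST_START (exit status 80). The snippet below is a minimal, self-contained sketch of that poll-with-backoff pattern in Go; it is not minikube's actual implementation, and checkComponents plus the backoff growth factor are illustrative stand-ins only.

// Minimal sketch (not minikube's code) of the retry.go pattern visible above:
// poll the required components, and while any are missing, sleep an increasing
// interval and try again until an overall deadline expires.
package main

import (
	"errors"
	"fmt"
	"time"
)

// checkComponents is a hypothetical stand-in for the real system_pods check;
// here it always reports the same failure seen in the log above.
func checkComponents() error {
	return errors.New("missing components: kube-dns, kube-proxy")
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // mirrors the "wait 6m0s for node" budget
	backoff := 14 * time.Second                 // roughly where the logged delays start

	for {
		err := checkComponents()
		if err == nil {
			fmt.Println("all required kube-system components are running")
			return
		}
		if time.Now().After(deadline) {
			// The equivalent point to the GUEST_START error / exit status 80 above.
			fmt.Printf("giving up: %v\n", err)
			return
		}
		fmt.Printf("will retry after %s: %v\n", backoff, err)
		time.Sleep(backoff)
		backoff = backoff * 3 / 2 // grow the interval, as the logged delays roughly do
	}
}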
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-330869
helpers_test.go:235: (dbg) docker inspect old-k8s-version-330869:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0ffb18ccd18d3a519fe433681e826678add5b991e61a9bda0dbfb9ea0dee4ec9",
	        "Created": "2023-10-05T20:36:53.706444621Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 850463,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-05T20:36:54.08354438Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:94671ba3754e2c6976414eaf20a0c7861a5d2f9fc631e1161e8ab0ded9062c52",
	        "ResolvConfPath": "/var/lib/docker/containers/0ffb18ccd18d3a519fe433681e826678add5b991e61a9bda0dbfb9ea0dee4ec9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0ffb18ccd18d3a519fe433681e826678add5b991e61a9bda0dbfb9ea0dee4ec9/hostname",
	        "HostsPath": "/var/lib/docker/containers/0ffb18ccd18d3a519fe433681e826678add5b991e61a9bda0dbfb9ea0dee4ec9/hosts",
	        "LogPath": "/var/lib/docker/containers/0ffb18ccd18d3a519fe433681e826678add5b991e61a9bda0dbfb9ea0dee4ec9/0ffb18ccd18d3a519fe433681e826678add5b991e61a9bda0dbfb9ea0dee4ec9-json.log",
	        "Name": "/old-k8s-version-330869",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-330869:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-330869",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0d686d9896e444b3737621f3bb0399b6dd3266180ac56b4497ffd2325577edca-init/diff:/var/lib/docker/overlay2/e65b3f74dc6bfb6767eea300df98bf2be99245c1b234ea43800cf021cd81177d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0d686d9896e444b3737621f3bb0399b6dd3266180ac56b4497ffd2325577edca/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0d686d9896e444b3737621f3bb0399b6dd3266180ac56b4497ffd2325577edca/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0d686d9896e444b3737621f3bb0399b6dd3266180ac56b4497ffd2325577edca/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-330869",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-330869/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-330869",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-330869",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-330869",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e9d5900763ffac860582f91e1cc24789bad5009ed40771fbeb5d999159eee780",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33383"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33382"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33379"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33381"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33380"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e9d5900763ff",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-330869": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0ffb18ccd18d",
	                        "old-k8s-version-330869"
	                    ],
	                    "NetworkID": "b2ec8c9cc8a493d14667efb735586eda5a96dcf492505b426d598dbb05a7c972",
	                    "EndpointID": "75ea406fffba6499772cde5d775de2d6bc83b43060d83c230ed678cdae12bc5e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
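The inspect output above shows the kic container did get the resources requested on the command line: HostConfig.Memory of 2306867200 bytes is exactly 2200 MiB (--memory=2200) and NanoCpus of 2000000000 is 2 CPUs, with the usual minikube ports (22, 2376, 5000, 8443, 32443) published on 127.0.0.1. Below is a minimal sketch for pulling just those fields out of `docker inspect` JSON, assuming the docker CLI is on PATH; the struct models only the fields used here, not the full inspect schema.

// Minimal sketch: pipe `docker inspect <container>` into this program to
// print the memory and CPU limits shown in the inspect output above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type inspect struct {
	Name       string
	HostConfig struct {
		Memory   int64 // bytes
		NanoCpus int64 // 1e9 == one CPU
	}
}

func main() {
	var containers []inspect
	if err := json.NewDecoder(os.Stdin).Decode(&containers); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, c := range containers {
		fmt.Printf("%s: memory=%d MiB, cpus=%.1f\n",
			c.Name,
			c.HostConfig.Memory/(1024*1024),
			float64(c.HostConfig.NanoCpus)/1e9)
	}
}

Piping `docker inspect old-k8s-version-330869` into it would print `/old-k8s-version-330869: memory=2200 MiB, cpus=2.0`, matching the flags from the failed start command.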
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-330869 -n old-k8s-version-330869
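The status probe above passes `--format={{.Host}}`, which minikube evaluates as a Go text/template against the status it reports. The sketch below shows only that templating mechanism; it is not minikube's code, and the Status struct and its field values are hypothetical.

// Minimal sketch of rendering a --format={{.Host}} style flag with Go's
// text/template, as the status command's format flag is evaluated.
package main

import (
	"os"
	"text/template"
)

// Status is a hypothetical stand-in for the struct the template is applied to.
type Status struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	format := "{{.Host}}\n" // the template the test passes via --format, plus a newline for output
	tmpl := template.Must(template.New("status").Parse(format))
	st := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
}

Run against that sample value, it simply prints "Running", which is the single-word output the test harness expects from the --format={{.Host}} probe.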
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-330869 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/FirstStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p embed-certs-411409 sudo                             | embed-certs-411409           | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p embed-certs-411409                                  | embed-certs-411409           | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-411409                                  | embed-certs-411409           | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-411409                                  | embed-certs-411409           | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	| delete  | -p embed-certs-411409                                  | embed-certs-411409           | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	| start   | -p newest-cni-251602 --memory=2200 --alsologtostderr   | newest-cni-251602            | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:45 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=docker            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| ssh     | -p                                                     | default-k8s-diff-port-973002 | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	|         | default-k8s-diff-port-973002                           |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-973002 | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	|         | default-k8s-diff-port-973002                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-973002 | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	|         | default-k8s-diff-port-973002                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-973002 | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	|         | default-k8s-diff-port-973002                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-973002 | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	|         | default-k8s-diff-port-973002                           |                              |         |         |                     |                     |
	| ssh     | -p no-preload-477708 sudo                              | no-preload-477708            | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p no-preload-477708                                   | no-preload-477708            | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-477708                                   | no-preload-477708            | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-477708                                   | no-preload-477708            | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	| delete  | -p no-preload-477708                                   | no-preload-477708            | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	| addons  | enable metrics-server -p newest-cni-251602             | newest-cni-251602            | jenkins | v1.31.2 | 05 Oct 23 20:45 UTC | 05 Oct 23 20:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-251602                                   | newest-cni-251602            | jenkins | v1.31.2 | 05 Oct 23 20:45 UTC | 05 Oct 23 20:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-251602                  | newest-cni-251602            | jenkins | v1.31.2 | 05 Oct 23 20:45 UTC | 05 Oct 23 20:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-251602 --memory=2200 --alsologtostderr   | newest-cni-251602            | jenkins | v1.31.2 | 05 Oct 23 20:45 UTC | 05 Oct 23 20:45 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=docker            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-251602 sudo                              | newest-cni-251602            | jenkins | v1.31.2 | 05 Oct 23 20:45 UTC | 05 Oct 23 20:45 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-251602                                   | newest-cni-251602            | jenkins | v1.31.2 | 05 Oct 23 20:45 UTC | 05 Oct 23 20:45 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-251602                                   | newest-cni-251602            | jenkins | v1.31.2 | 05 Oct 23 20:45 UTC | 05 Oct 23 20:45 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-251602                                   | newest-cni-251602            | jenkins | v1.31.2 | 05 Oct 23 20:45 UTC | 05 Oct 23 20:45 UTC |
	| delete  | -p newest-cni-251602                                   | newest-cni-251602            | jenkins | v1.31.2 | 05 Oct 23 20:45 UTC | 05 Oct 23 20:45 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/05 20:45:14
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1005 20:45:14.405012  941739 out.go:296] Setting OutFile to fd 1 ...
	I1005 20:45:14.405318  941739 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:45:14.405332  941739 out.go:309] Setting ErrFile to fd 2...
	I1005 20:45:14.405338  941739 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:45:14.405563  941739 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-491115/.minikube/bin
	I1005 20:45:14.406125  941739 out.go:303] Setting JSON to false
	I1005 20:45:14.408036  941739 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8863,"bootTime":1696529852,"procs":691,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1005 20:45:14.408112  941739 start.go:138] virtualization: kvm guest
	I1005 20:45:14.411041  941739 out.go:177] * [newest-cni-251602] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1005 20:45:14.412825  941739 out.go:177]   - MINIKUBE_LOCATION=17363
	I1005 20:45:14.414496  941739 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 20:45:14.412885  941739 notify.go:220] Checking for updates...
	I1005 20:45:14.417488  941739 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17363-491115/kubeconfig
	I1005 20:45:14.419444  941739 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-491115/.minikube
	I1005 20:45:14.420812  941739 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1005 20:45:14.422387  941739 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1005 20:45:14.424417  941739 config.go:182] Loaded profile config "newest-cni-251602": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1005 20:45:14.424920  941739 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 20:45:14.447137  941739 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 20:45:14.447233  941739 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 20:45:14.502313  941739 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:55 SystemTime:2023-10-05 20:45:14.492746667 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1005 20:45:14.502465  941739 docker.go:294] overlay module found
	I1005 20:45:14.504743  941739 out.go:177] * Using the docker driver based on existing profile
	I1005 20:45:14.506376  941739 start.go:298] selected driver: docker
	I1005 20:45:14.506399  941739 start.go:902] validating driver "docker" against &{Name:newest-cni-251602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-251602 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress:
Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 20:45:14.506507  941739 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1005 20:45:14.507273  941739 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 20:45:14.559655  941739 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:55 SystemTime:2023-10-05 20:45:14.550952004 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1005 20:45:14.560012  941739 start_flags.go:942] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1005 20:45:14.560046  941739 cni.go:84] Creating CNI manager for ""
	I1005 20:45:14.560066  941739 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1005 20:45:14.560079  941739 start_flags.go:321] config:
	{Name:newest-cni-251602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-251602 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 20:45:14.562326  941739 out.go:177] * Starting control plane node newest-cni-251602 in cluster newest-cni-251602
	I1005 20:45:14.565495  941739 cache.go:122] Beginning downloading kic base image for docker with docker
	I1005 20:45:14.567000  941739 out.go:177] * Pulling base image ...
	I1005 20:45:14.568566  941739 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1005 20:45:14.568620  941739 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17363-491115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I1005 20:45:14.568631  941739 cache.go:57] Caching tarball of preloaded images
	I1005 20:45:14.568707  941739 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1005 20:45:14.568717  941739 preload.go:174] Found /home/jenkins/minikube-integration/17363-491115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1005 20:45:14.568791  941739 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1005 20:45:14.568916  941739 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602/config.json ...
	I1005 20:45:14.586420  941739 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon, skipping pull
	I1005 20:45:14.586452  941739 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in daemon, skipping load
	I1005 20:45:14.586477  941739 cache.go:195] Successfully downloaded all kic artifacts
	I1005 20:45:14.586522  941739 start.go:365] acquiring machines lock for newest-cni-251602: {Name:mkefe4baf7b8136c10dd9c20a98860ec3c495766 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 20:45:14.586596  941739 start.go:369] acquired machines lock for "newest-cni-251602" in 47.72µs
	I1005 20:45:14.586622  941739 start.go:96] Skipping create...Using existing machine configuration
	I1005 20:45:14.586642  941739 fix.go:54] fixHost starting: 
	I1005 20:45:14.587273  941739 cli_runner.go:164] Run: docker container inspect newest-cni-251602 --format={{.State.Status}}
	I1005 20:45:14.605317  941739 fix.go:102] recreateIfNeeded on newest-cni-251602: state=Stopped err=<nil>
	W1005 20:45:14.605354  941739 fix.go:128] unexpected machine state, will restart: <nil>
	I1005 20:45:14.607609  941739 out.go:177] * Restarting existing docker container for "newest-cni-251602" ...
	I1005 20:45:15.417486  848852 system_pods.go:86] 7 kube-system pods found
	I1005 20:45:15.417531  848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:45:15.417542  848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
	I1005 20:45:15.417550  848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
	I1005 20:45:15.417558  848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
	I1005 20:45:15.417565  848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:45:15.417579  848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
	I1005 20:45:15.417589  848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:45:15.417621  848852 retry.go:31] will retry after 1m12.232820849s: missing components: kube-dns, kube-proxy
	I1005 20:45:14.609066  941739 cli_runner.go:164] Run: docker start newest-cni-251602
	I1005 20:45:14.897686  941739 cli_runner.go:164] Run: docker container inspect newest-cni-251602 --format={{.State.Status}}
	I1005 20:45:14.916217  941739 kic.go:426] container "newest-cni-251602" state is running.
	I1005 20:45:14.916594  941739 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-251602
	I1005 20:45:14.935722  941739 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602/config.json ...
	I1005 20:45:14.935987  941739 machine.go:88] provisioning docker machine ...
	I1005 20:45:14.936015  941739 ubuntu.go:169] provisioning hostname "newest-cni-251602"
	I1005 20:45:14.936080  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:14.954269  941739 main.go:141] libmachine: Using SSH client type: native
	I1005 20:45:14.954655  941739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1005 20:45:14.954675  941739 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-251602 && echo "newest-cni-251602" | sudo tee /etc/hostname
	I1005 20:45:14.955367  941739 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60694->127.0.0.1:33423: read: connection reset by peer
	I1005 20:45:18.101383  941739 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-251602
	
	I1005 20:45:18.101493  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:18.118632  941739 main.go:141] libmachine: Using SSH client type: native
	I1005 20:45:18.118970  941739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1005 20:45:18.118988  941739 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-251602' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-251602/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-251602' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1005 20:45:18.254181  941739 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1005 20:45:18.254212  941739 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17363-491115/.minikube CaCertPath:/home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17363-491115/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17363-491115/.minikube}
	I1005 20:45:18.254247  941739 ubuntu.go:177] setting up certificates
	I1005 20:45:18.254259  941739 provision.go:83] configureAuth start
	I1005 20:45:18.254314  941739 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-251602
	I1005 20:45:18.271133  941739 provision.go:138] copyHostCerts
	I1005 20:45:18.271209  941739 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-491115/.minikube/ca.pem, removing ...
	I1005 20:45:18.271225  941739 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-491115/.minikube/ca.pem
	I1005 20:45:18.271301  941739 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17363-491115/.minikube/ca.pem (1082 bytes)
	I1005 20:45:18.271415  941739 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-491115/.minikube/cert.pem, removing ...
	I1005 20:45:18.271430  941739 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-491115/.minikube/cert.pem
	I1005 20:45:18.271455  941739 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17363-491115/.minikube/cert.pem (1123 bytes)
	I1005 20:45:18.271518  941739 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-491115/.minikube/key.pem, removing ...
	I1005 20:45:18.271526  941739 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-491115/.minikube/key.pem
	I1005 20:45:18.271548  941739 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17363-491115/.minikube/key.pem (1679 bytes)
	I1005 20:45:18.271607  941739 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17363-491115/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca-key.pem org=jenkins.newest-cni-251602 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-251602]
	I1005 20:45:18.410529  941739 provision.go:172] copyRemoteCerts
	I1005 20:45:18.410591  941739 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1005 20:45:18.410642  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:18.427655  941739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/newest-cni-251602/id_rsa Username:docker}
	I1005 20:45:18.525913  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1005 20:45:18.548522  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1005 20:45:18.571080  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1005 20:45:18.594270  941739 provision.go:86] duration metric: configureAuth took 339.997588ms
	I1005 20:45:18.594302  941739 ubuntu.go:193] setting minikube options for container-runtime
	I1005 20:45:18.594515  941739 config.go:182] Loaded profile config "newest-cni-251602": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1005 20:45:18.594580  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:18.611692  941739 main.go:141] libmachine: Using SSH client type: native
	I1005 20:45:18.612072  941739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1005 20:45:18.612089  941739 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1005 20:45:18.745964  941739 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1005 20:45:18.745987  941739 ubuntu.go:71] root file system type: overlay
	I1005 20:45:18.746127  941739 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1005 20:45:18.746195  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:18.763221  941739 main.go:141] libmachine: Using SSH client type: native
	I1005 20:45:18.763676  941739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1005 20:45:18.763773  941739 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1005 20:45:18.908747  941739 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1005 20:45:18.908833  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:18.927242  941739 main.go:141] libmachine: Using SSH client type: native
	I1005 20:45:18.927586  941739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1005 20:45:18.927612  941739 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1005 20:45:19.070807  941739 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1005 20:45:19.070845  941739 machine.go:91] provisioned docker machine in 4.134838843s
	I1005 20:45:19.070863  941739 start.go:300] post-start starting for "newest-cni-251602" (driver="docker")
	I1005 20:45:19.070880  941739 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1005 20:45:19.070965  941739 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1005 20:45:19.071034  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:19.088361  941739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/newest-cni-251602/id_rsa Username:docker}
	I1005 20:45:19.186060  941739 ssh_runner.go:195] Run: cat /etc/os-release
	I1005 20:45:19.189266  941739 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1005 20:45:19.189348  941739 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1005 20:45:19.189371  941739 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1005 20:45:19.189382  941739 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1005 20:45:19.189396  941739 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-491115/.minikube/addons for local assets ...
	I1005 20:45:19.189452  941739 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-491115/.minikube/files for local assets ...
	I1005 20:45:19.189539  941739 filesync.go:149] local asset: /home/jenkins/minikube-integration/17363-491115/.minikube/files/etc/ssl/certs/4979262.pem -> 4979262.pem in /etc/ssl/certs
	I1005 20:45:19.189654  941739 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1005 20:45:19.198001  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/files/etc/ssl/certs/4979262.pem --> /etc/ssl/certs/4979262.pem (1708 bytes)
	I1005 20:45:19.219671  941739 start.go:303] post-start completed in 148.789062ms
	I1005 20:45:19.219760  941739 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1005 20:45:19.219819  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:19.237287  941739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/newest-cni-251602/id_rsa Username:docker}
	I1005 20:45:19.330407  941739 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1005 20:45:19.334776  941739 fix.go:56] fixHost completed within 4.748135457s
	I1005 20:45:19.334813  941739 start.go:83] releasing machines lock for "newest-cni-251602", held for 4.7482043s
	I1005 20:45:19.334891  941739 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-251602
	I1005 20:45:19.351556  941739 ssh_runner.go:195] Run: cat /version.json
	I1005 20:45:19.351608  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:19.351662  941739 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1005 20:45:19.351741  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:19.368619  941739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/newest-cni-251602/id_rsa Username:docker}
	I1005 20:45:19.369076  941739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/newest-cni-251602/id_rsa Username:docker}
	I1005 20:45:19.550177  941739 ssh_runner.go:195] Run: systemctl --version
	I1005 20:45:19.554696  941739 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1005 20:45:19.559119  941739 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1005 20:45:19.576904  941739 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1005 20:45:19.576985  941739 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1005 20:45:19.585375  941739 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1005 20:45:19.585410  941739 start.go:469] detecting cgroup driver to use...
	I1005 20:45:19.585444  941739 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1005 20:45:19.585560  941739 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1005 20:45:19.600124  941739 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1005 20:45:19.609154  941739 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1005 20:45:19.618149  941739 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1005 20:45:19.618216  941739 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1005 20:45:19.627522  941739 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1005 20:45:19.636836  941739 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1005 20:45:19.646086  941739 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1005 20:45:19.655673  941739 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1005 20:45:19.664512  941739 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1005 20:45:19.674505  941739 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1005 20:45:19.682683  941739 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1005 20:45:19.691073  941739 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1005 20:45:19.769287  941739 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1005 20:45:19.852660  941739 start.go:469] detecting cgroup driver to use...
	I1005 20:45:19.852792  941739 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1005 20:45:19.852882  941739 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1005 20:45:19.864848  941739 cruntime.go:277] skipping containerd shutdown because we are bound to it
	I1005 20:45:19.864918  941739 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1005 20:45:19.877630  941739 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1005 20:45:19.895392  941739 ssh_runner.go:195] Run: which cri-dockerd
	I1005 20:45:19.899661  941739 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1005 20:45:19.918552  941739 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1005 20:45:19.936911  941739 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1005 20:45:20.046865  941739 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1005 20:45:20.144163  941739 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I1005 20:45:20.144299  941739 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1005 20:45:20.161707  941739 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1005 20:45:20.251848  941739 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1005 20:45:20.520825  941739 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1005 20:45:20.605718  941739 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1005 20:45:20.688963  941739 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1005 20:45:20.773512  941739 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1005 20:45:20.854013  941739 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1005 20:45:20.867324  941739 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1005 20:45:20.946882  941739 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1005 20:45:21.017496  941739 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1005 20:45:21.017569  941739 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1005 20:45:21.021797  941739 start.go:537] Will wait 60s for crictl version
	I1005 20:45:21.021856  941739 ssh_runner.go:195] Run: which crictl
	I1005 20:45:21.025426  941739 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1005 20:45:21.070905  941739 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1005 20:45:21.070975  941739 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1005 20:45:21.094936  941739 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1005 20:45:21.121912  941739 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I1005 20:45:21.121999  941739 cli_runner.go:164] Run: docker network inspect newest-cni-251602 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1005 20:45:21.138556  941739 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I1005 20:45:21.142440  941739 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1005 20:45:21.154570  941739 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1005 20:45:21.157976  941739 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1005 20:45:21.158071  941739 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1005 20:45:21.178251  941739 docker.go:664] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1005 20:45:21.178278  941739 docker.go:594] Images already preloaded, skipping extraction
	I1005 20:45:21.178347  941739 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1005 20:45:21.197723  941739 docker.go:664] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1005 20:45:21.197759  941739 cache_images.go:84] Images are preloaded, skipping loading
	I1005 20:45:21.197823  941739 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1005 20:45:21.251580  941739 cni.go:84] Creating CNI manager for ""
	I1005 20:45:21.251616  941739 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1005 20:45:21.251639  941739 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I1005 20:45:21.251658  941739 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-251602 NodeName:newest-cni-251602 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1005 20:45:21.251840  941739 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-251602"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1005 20:45:21.251930  941739 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-251602 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:newest-cni-251602 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1005 20:45:21.251984  941739 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1005 20:45:21.260656  941739 binaries.go:44] Found k8s binaries, skipping transfer
	I1005 20:45:21.260726  941739 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1005 20:45:21.269056  941739 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (415 bytes)
	I1005 20:45:21.286089  941739 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1005 20:45:21.302730  941739 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1005 20:45:21.319579  941739 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1005 20:45:21.322925  941739 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1005 20:45:21.333438  941739 certs.go:56] Setting up /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602 for IP: 192.168.67.2
	I1005 20:45:21.333472  941739 certs.go:190] acquiring lock for shared ca certs: {Name:mka6627fa5c31076c5fa233a6bbda946476bff5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:45:21.333619  941739 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17363-491115/.minikube/ca.key
	I1005 20:45:21.333654  941739 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17363-491115/.minikube/proxy-client-ca.key
	I1005 20:45:21.333737  941739 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602/client.key
	I1005 20:45:21.333791  941739 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602/apiserver.key.c7fa3a9e
	I1005 20:45:21.333823  941739 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602/proxy-client.key
	I1005 20:45:21.333912  941739 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/home/jenkins/minikube-integration/17363-491115/.minikube/certs/497926.pem (1338 bytes)
	W1005 20:45:21.333938  941739 certs.go:433] ignoring /home/jenkins/minikube-integration/17363-491115/.minikube/certs/home/jenkins/minikube-integration/17363-491115/.minikube/certs/497926_empty.pem, impossibly tiny 0 bytes
	I1005 20:45:21.333949  941739 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca-key.pem (1671 bytes)
	I1005 20:45:21.333973  941739 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca.pem (1082 bytes)
	I1005 20:45:21.334008  941739 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/home/jenkins/minikube-integration/17363-491115/.minikube/certs/cert.pem (1123 bytes)
	I1005 20:45:21.334047  941739 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/home/jenkins/minikube-integration/17363-491115/.minikube/certs/key.pem (1679 bytes)
	I1005 20:45:21.334102  941739 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-491115/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17363-491115/.minikube/files/etc/ssl/certs/4979262.pem (1708 bytes)
	I1005 20:45:21.334741  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1005 20:45:21.357132  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1005 20:45:21.379412  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1005 20:45:21.402402  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1005 20:45:21.425553  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1005 20:45:21.448572  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1005 20:45:21.470803  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1005 20:45:21.492671  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1005 20:45:21.514617  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/certs/497926.pem --> /usr/share/ca-certificates/497926.pem (1338 bytes)
	I1005 20:45:21.537065  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/files/etc/ssl/certs/4979262.pem --> /usr/share/ca-certificates/4979262.pem (1708 bytes)
	I1005 20:45:21.559657  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1005 20:45:21.582144  941739 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1005 20:45:21.598672  941739 ssh_runner.go:195] Run: openssl version
	I1005 20:45:21.604061  941739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/497926.pem && ln -fs /usr/share/ca-certificates/497926.pem /etc/ssl/certs/497926.pem"
	I1005 20:45:21.613694  941739 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/497926.pem
	I1005 20:45:21.617122  941739 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  5 20:07 /usr/share/ca-certificates/497926.pem
	I1005 20:45:21.617186  941739 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/497926.pem
	I1005 20:45:21.623795  941739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/497926.pem /etc/ssl/certs/51391683.0"
	I1005 20:45:21.632192  941739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4979262.pem && ln -fs /usr/share/ca-certificates/4979262.pem /etc/ssl/certs/4979262.pem"
	I1005 20:45:21.641540  941739 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4979262.pem
	I1005 20:45:21.644804  941739 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  5 20:07 /usr/share/ca-certificates/4979262.pem
	I1005 20:45:21.644853  941739 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4979262.pem
	I1005 20:45:21.651399  941739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4979262.pem /etc/ssl/certs/3ec20f2e.0"
	I1005 20:45:21.659734  941739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1005 20:45:21.668779  941739 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1005 20:45:21.672400  941739 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  5 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1005 20:45:21.672473  941739 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1005 20:45:21.678971  941739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1005 20:45:21.688374  941739 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1005 20:45:21.691701  941739 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1005 20:45:21.698446  941739 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1005 20:45:21.704585  941739 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1005 20:45:21.710930  941739 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1005 20:45:21.717269  941739 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1005 20:45:21.723706  941739 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1005 20:45:21.730244  941739 kubeadm.go:404] StartCluster: {Name:newest-cni-251602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-251602 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 20:45:21.730390  941739 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1005 20:45:21.749238  941739 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1005 20:45:21.757704  941739 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1005 20:45:21.757777  941739 kubeadm.go:636] restartCluster start
	I1005 20:45:21.757833  941739 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1005 20:45:21.766002  941739 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:21.766568  941739 kubeconfig.go:135] verify returned: extract IP: "newest-cni-251602" does not appear in /home/jenkins/minikube-integration/17363-491115/kubeconfig
	I1005 20:45:21.766798  941739 kubeconfig.go:146] "newest-cni-251602" context is missing from /home/jenkins/minikube-integration/17363-491115/kubeconfig - will repair!
	I1005 20:45:21.767178  941739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-491115/kubeconfig: {Name:mkd6618cb8d42fbccf8ec108c3891f3e690ff249 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:45:21.768584  941739 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1005 20:45:21.777081  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:21.777142  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:21.786498  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:21.786517  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:21.786555  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:21.795849  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:22.296543  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:22.296643  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:22.307113  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:22.796806  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:22.796920  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:22.807658  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:23.296196  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:23.296307  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:23.307063  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:23.796660  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:23.796750  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:23.807326  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:24.296919  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:24.297003  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:24.307595  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:24.796497  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:24.796585  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:24.807169  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:25.296770  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:25.296888  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:25.307546  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:25.796061  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:25.796166  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:25.806783  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:26.296330  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:26.296433  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:26.307074  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:26.796470  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:26.796577  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:26.806786  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:27.296331  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:27.296415  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:27.306522  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:27.796815  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:27.796927  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:27.807056  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:28.296676  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:28.296772  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:28.307093  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:28.796685  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:28.796792  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:28.807035  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:29.296656  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:29.296766  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:29.306878  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:29.796676  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:29.796758  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:29.807141  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:30.296755  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:30.296850  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:30.306907  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:30.796266  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:30.796377  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:30.806636  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:31.296136  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:31.296248  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:31.306343  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:31.778147  941739 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1005 20:45:31.778197  941739 kubeadm.go:1128] stopping kube-system containers ...
	I1005 20:45:31.778276  941739 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1005 20:45:31.799139  941739 docker.go:463] Stopping containers: [edbeda11d2dc 9f2cb55357e2 f6bfaab5a6ac 67260f7c09c8 fd1feebd6b30 dd66b2b22702 01b6b78a55a3 b4382c5ea59f 12ffa1278374 5918e2e006de 5959b2ce7826 84e04d4b3dda 2fee3456f3f2 4f3db88655d7]
	I1005 20:45:31.799221  941739 ssh_runner.go:195] Run: docker stop edbeda11d2dc 9f2cb55357e2 f6bfaab5a6ac 67260f7c09c8 fd1feebd6b30 dd66b2b22702 01b6b78a55a3 b4382c5ea59f 12ffa1278374 5918e2e006de 5959b2ce7826 84e04d4b3dda 2fee3456f3f2 4f3db88655d7
	I1005 20:45:31.819269  941739 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1005 20:45:31.831589  941739 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1005 20:45:31.840562  941739 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Oct  5 20:44 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Oct  5 20:44 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Oct  5 20:44 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Oct  5 20:44 /etc/kubernetes/scheduler.conf
	
	I1005 20:45:31.840635  941739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1005 20:45:31.848959  941739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1005 20:45:31.857521  941739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1005 20:45:31.865912  941739 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:31.865992  941739 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1005 20:45:31.874539  941739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1005 20:45:31.882971  941739 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:31.883036  941739 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1005 20:45:31.891165  941739 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1005 20:45:31.899809  941739 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1005 20:45:31.899844  941739 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1005 20:45:31.950458  941739 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1005 20:45:32.439655  941739 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1005 20:45:32.588235  941739 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1005 20:45:32.644120  941739 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1005 20:45:32.740951  941739 api_server.go:52] waiting for apiserver process to appear ...
	I1005 20:45:32.741029  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1005 20:45:32.753615  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1005 20:45:33.330126  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1005 20:45:33.829788  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1005 20:45:33.846461  941739 api_server.go:72] duration metric: took 1.105507442s to wait for apiserver process to appear ...
	I1005 20:45:33.846542  941739 api_server.go:88] waiting for apiserver healthz status ...
	I1005 20:45:33.846578  941739 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 20:45:33.846977  941739 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I1005 20:45:33.847055  941739 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 20:45:33.847357  941739 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I1005 20:45:34.348075  941739 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 20:45:36.627973  941739 api_server.go:279] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1005 20:45:36.628063  941739 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1005 20:45:36.628087  941739 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 20:45:36.740856  941739 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1005 20:45:36.740956  941739 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1005 20:45:36.848296  941739 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 20:45:36.853601  941739 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1005 20:45:36.853628  941739 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1005 20:45:37.348237  941739 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 20:45:37.352593  941739 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1005 20:45:37.352618  941739 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1005 20:45:37.847923  941739 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 20:45:37.852873  941739 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1005 20:45:37.852902  941739 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1005 20:45:38.348152  941739 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 20:45:38.354442  941739 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1005 20:45:38.363755  941739 api_server.go:141] control plane version: v1.28.2
	I1005 20:45:38.363785  941739 api_server.go:131] duration metric: took 4.517223524s to wait for apiserver health ...
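[Editor's note] The api_server.go lines above trace a plain retry loop: probe https://192.168.67.2:8443/healthz, treat 403 (the anonymous user is rejected while RBAC bootstraps) and 500 (poststarthooks still failing) as "not ready yet", and stop as soon as the endpoint returns 200. As a rough illustration of that pattern only, not minikube's actual implementation, here is a minimal Go sketch; the URL is taken from the log, while the timeout and poll interval are assumptions:

// healthz_poll.go: minimal sketch of the retry pattern traced in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// The bootstrapping apiserver serves a cert the probe has not trusted yet,
	// so verification is skipped here (illustrative only; the real client
	// authenticates with the cluster CA).
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: apiserver is healthy
			}
			// 403 and 500 both mean "keep waiting", as in the log above.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s never became healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.67.2:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}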
	I1005 20:45:38.363796  941739 cni.go:84] Creating CNI manager for ""
	I1005 20:45:38.363807  941739 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1005 20:45:38.365566  941739 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1005 20:45:38.366945  941739 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1005 20:45:38.375605  941739 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
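[Editor's note] The log records only that a 457-byte conflist was copied to /etc/cni/net.d/1-k8s.conflist; the file contents are not shown. Purely for orientation, the sketch below prints the general shape of a bridge plus host-local IPAM conflist of that kind. Every field value (cniVersion, subnet, plugin set) is an assumption for illustration, not the file minikube generated:

// cni_conflist.go: illustrative shape of a bridge CNI conflist.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	conflist := map[string]interface{}{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []interface{}{
			map[string]interface{}{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // assumed pod CIDR
				},
			},
			map[string]interface{}{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(out)) // JSON of this kind is what lands in /etc/cni/net.d/1-k8s.conflist
}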
	I1005 20:45:38.418968  941739 system_pods.go:43] waiting for kube-system pods to appear ...
	I1005 20:45:38.430492  941739 system_pods.go:59] 8 kube-system pods found
	I1005 20:45:38.430531  941739 system_pods.go:61] "coredns-5dd5756b68-bm584" [0aa18475-85e2-44fd-b2f3-bea8e676ae2e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:45:38.430541  941739 system_pods.go:61] "etcd-newest-cni-251602" [34417493-e814-4f29-b447-2863b3cfcf27] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1005 20:45:38.430550  941739 system_pods.go:61] "kube-apiserver-newest-cni-251602" [ea09c35f-5b0a-4a6f-b11c-8a5516fbd861] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1005 20:45:38.430560  941739 system_pods.go:61] "kube-controller-manager-newest-cni-251602" [4636875d-c7bf-4080-a173-2ea829bdbbc3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1005 20:45:38.430571  941739 system_pods.go:61] "kube-proxy-vtq52" [c6349e67-7d8d-4cca-9b07-1eb70a41bb60] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:45:38.430603  941739 system_pods.go:61] "kube-scheduler-newest-cni-251602" [c3320fb3-4468-4f4c-ac6e-3d3aa7c4af0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1005 20:45:38.430617  941739 system_pods.go:61] "metrics-server-57f55c9bc5-75jt5" [6455e407-161e-4abe-94a4-8fb5968789b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1005 20:45:38.430631  941739 system_pods.go:61] "storage-provisioner" [50c99270-263a-466f-ae78-8da1c3fe7545] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:45:38.430641  941739 system_pods.go:74] duration metric: took 11.652857ms to wait for pod list to return data ...
	I1005 20:45:38.430649  941739 node_conditions.go:102] verifying NodePressure condition ...
	I1005 20:45:38.435489  941739 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1005 20:45:38.435522  941739 node_conditions.go:123] node cpu capacity is 8
	I1005 20:45:38.435538  941739 node_conditions.go:105] duration metric: took 4.879676ms to run NodePressure ...
	I1005 20:45:38.435565  941739 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1005 20:45:38.709413  941739 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1005 20:45:38.718207  941739 ops.go:34] apiserver oom_adj: -16
	I1005 20:45:38.718235  941739 kubeadm.go:640] restartCluster took 16.960444278s
	I1005 20:45:38.718247  941739 kubeadm.go:406] StartCluster complete in 16.988017482s
	I1005 20:45:38.718274  941739 settings.go:142] acquiring lock: {Name:mk74c5e95d8c9fcaf06097e6d304129504752ec7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:45:38.718351  941739 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17363-491115/kubeconfig
	I1005 20:45:38.719220  941739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-491115/kubeconfig: {Name:mkd6618cb8d42fbccf8ec108c3891f3e690ff249 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:45:38.719473  941739 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1005 20:45:38.719630  941739 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1005 20:45:38.719714  941739 config.go:182] Loaded profile config "newest-cni-251602": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1005 20:45:38.719720  941739 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-251602"
	I1005 20:45:38.719738  941739 addons.go:231] Setting addon storage-provisioner=true in "newest-cni-251602"
	W1005 20:45:38.719746  941739 addons.go:240] addon storage-provisioner should already be in state true
	I1005 20:45:38.719745  941739 addons.go:69] Setting metrics-server=true in profile "newest-cni-251602"
	I1005 20:45:38.719747  941739 addons.go:69] Setting default-storageclass=true in profile "newest-cni-251602"
	I1005 20:45:38.719763  941739 addons.go:231] Setting addon metrics-server=true in "newest-cni-251602"
	W1005 20:45:38.719772  941739 addons.go:240] addon metrics-server should already be in state true
	I1005 20:45:38.719786  941739 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-251602"
	I1005 20:45:38.719799  941739 host.go:66] Checking if "newest-cni-251602" exists ...
	I1005 20:45:38.719813  941739 host.go:66] Checking if "newest-cni-251602" exists ...
	I1005 20:45:38.719800  941739 addons.go:69] Setting dashboard=true in profile "newest-cni-251602"
	I1005 20:45:38.719834  941739 addons.go:231] Setting addon dashboard=true in "newest-cni-251602"
	W1005 20:45:38.719843  941739 addons.go:240] addon dashboard should already be in state true
	I1005 20:45:38.719903  941739 host.go:66] Checking if "newest-cni-251602" exists ...
	I1005 20:45:38.720124  941739 cli_runner.go:164] Run: docker container inspect newest-cni-251602 --format={{.State.Status}}
	I1005 20:45:38.720279  941739 cli_runner.go:164] Run: docker container inspect newest-cni-251602 --format={{.State.Status}}
	I1005 20:45:38.720282  941739 cli_runner.go:164] Run: docker container inspect newest-cni-251602 --format={{.State.Status}}
	I1005 20:45:38.720344  941739 cli_runner.go:164] Run: docker container inspect newest-cni-251602 --format={{.State.Status}}
	I1005 20:45:38.723756  941739 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-251602" context rescaled to 1 replicas
	I1005 20:45:38.723804  941739 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1005 20:45:38.727049  941739 out.go:177] * Verifying Kubernetes components...
	I1005 20:45:38.728767  941739 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 20:45:38.743950  941739 addons.go:231] Setting addon default-storageclass=true in "newest-cni-251602"
	W1005 20:45:38.744161  941739 addons.go:240] addon default-storageclass should already be in state true
	I1005 20:45:38.744212  941739 host.go:66] Checking if "newest-cni-251602" exists ...
	I1005 20:45:38.744748  941739 cli_runner.go:164] Run: docker container inspect newest-cni-251602 --format={{.State.Status}}
	I1005 20:45:38.761605  941739 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1005 20:45:38.762994  941739 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1005 20:45:38.764361  941739 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1005 20:45:38.762962  941739 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1005 20:45:38.766924  941739 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1005 20:45:38.765746  941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1005 20:45:38.765763  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1005 20:45:38.768302  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:38.768331  941739 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1005 20:45:38.768349  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1005 20:45:38.768396  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:38.768481  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1005 20:45:38.768528  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:38.771651  941739 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1005 20:45:38.771678  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1005 20:45:38.771838  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:38.792082  941739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/newest-cni-251602/id_rsa Username:docker}
	I1005 20:45:38.798427  941739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/newest-cni-251602/id_rsa Username:docker}
	I1005 20:45:38.803117  941739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/newest-cni-251602/id_rsa Username:docker}
	I1005 20:45:38.806797  941739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/newest-cni-251602/id_rsa Username:docker}
	I1005 20:45:38.847194  941739 start.go:896] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1005 20:45:38.847278  941739 api_server.go:52] waiting for apiserver process to appear ...
	I1005 20:45:38.847344  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1005 20:45:38.926982  941739 api_server.go:72] duration metric: took 203.134329ms to wait for apiserver process to appear ...
	I1005 20:45:38.927013  941739 api_server.go:88] waiting for apiserver healthz status ...
	I1005 20:45:38.927033  941739 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 20:45:38.931963  941739 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1005 20:45:38.933196  941739 api_server.go:141] control plane version: v1.28.2
	I1005 20:45:38.933257  941739 api_server.go:131] duration metric: took 6.235518ms to wait for apiserver health ...
	I1005 20:45:38.933268  941739 system_pods.go:43] waiting for kube-system pods to appear ...
	I1005 20:45:38.938837  941739 system_pods.go:59] 8 kube-system pods found
	I1005 20:45:38.938869  941739 system_pods.go:61] "coredns-5dd5756b68-bm584" [0aa18475-85e2-44fd-b2f3-bea8e676ae2e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:45:38.938882  941739 system_pods.go:61] "etcd-newest-cni-251602" [34417493-e814-4f29-b447-2863b3cfcf27] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1005 20:45:38.938893  941739 system_pods.go:61] "kube-apiserver-newest-cni-251602" [ea09c35f-5b0a-4a6f-b11c-8a5516fbd861] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1005 20:45:38.938906  941739 system_pods.go:61] "kube-controller-manager-newest-cni-251602" [4636875d-c7bf-4080-a173-2ea829bdbbc3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1005 20:45:38.938913  941739 system_pods.go:61] "kube-proxy-vtq52" [c6349e67-7d8d-4cca-9b07-1eb70a41bb60] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:45:38.938919  941739 system_pods.go:61] "kube-scheduler-newest-cni-251602" [c3320fb3-4468-4f4c-ac6e-3d3aa7c4af0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1005 20:45:38.938932  941739 system_pods.go:61] "metrics-server-57f55c9bc5-75jt5" [6455e407-161e-4abe-94a4-8fb5968789b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1005 20:45:38.938943  941739 system_pods.go:61] "storage-provisioner" [50c99270-263a-466f-ae78-8da1c3fe7545] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:45:38.938955  941739 system_pods.go:74] duration metric: took 5.679606ms to wait for pod list to return data ...
	I1005 20:45:38.938967  941739 default_sa.go:34] waiting for default service account to be created ...
	I1005 20:45:38.941596  941739 default_sa.go:45] found service account: "default"
	I1005 20:45:38.941625  941739 default_sa.go:55] duration metric: took 2.647466ms for default service account to be created ...
	I1005 20:45:38.941638  941739 kubeadm.go:581] duration metric: took 217.801105ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I1005 20:45:38.941657  941739 node_conditions.go:102] verifying NodePressure condition ...
	I1005 20:45:38.944359  941739 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1005 20:45:38.944385  941739 node_conditions.go:123] node cpu capacity is 8
	I1005 20:45:38.944399  941739 node_conditions.go:105] duration metric: took 2.735534ms to run NodePressure ...
	I1005 20:45:38.944414  941739 start.go:228] waiting for startup goroutines ...
	I1005 20:45:39.031121  941739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1005 20:45:39.031835  941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1005 20:45:39.031864  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1005 20:45:39.037663  941739 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1005 20:45:39.037689  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1005 20:45:39.038028  941739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1005 20:45:39.052055  941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1005 20:45:39.052084  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1005 20:45:39.122929  941739 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1005 20:45:39.122960  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1005 20:45:39.135708  941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1005 20:45:39.135738  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1005 20:45:39.148797  941739 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1005 20:45:39.148828  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1005 20:45:39.233123  941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1005 20:45:39.233156  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1005 20:45:39.246996  941739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1005 20:45:39.325634  941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1005 20:45:39.325672  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1005 20:45:39.348115  941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1005 20:45:39.348137  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1005 20:45:39.436685  941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1005 20:45:39.436712  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1005 20:45:39.528259  941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1005 20:45:39.528287  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1005 20:45:39.547672  941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1005 20:45:39.547706  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1005 20:45:39.565975  941739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1005 20:45:40.443947  941739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.412776284s)
	I1005 20:45:40.444070  941739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.406004214s)
	I1005 20:45:40.571364  941739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.324316067s)
	I1005 20:45:40.571417  941739 addons.go:467] Verifying addon metrics-server=true in "newest-cni-251602"
	I1005 20:45:40.917851  941739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.351821533s)
	I1005 20:45:40.919845  941739 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-251602 addons enable metrics-server	
	
	
	I1005 20:45:40.921418  941739 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1005 20:45:40.922771  941739 addons.go:502] enable addons completed in 2.203154287s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1005 20:45:40.922805  941739 start.go:233] waiting for cluster config update ...
	I1005 20:45:40.922816  941739 start.go:242] writing updated cluster config ...
	I1005 20:45:40.923059  941739 ssh_runner.go:195] Run: rm -f paused
	I1005 20:45:40.970862  941739 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1005 20:45:40.972904  941739 out.go:177] * Done! kubectl is now configured to use "newest-cni-251602" cluster and "default" namespace by default
	I1005 20:46:27.657681  848852 system_pods.go:86] 7 kube-system pods found
	I1005 20:46:27.657726  848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:46:27.657736  848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
	I1005 20:46:27.657741  848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
	I1005 20:46:27.657747  848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
	I1005 20:46:27.657753  848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:46:27.657758  848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
	I1005 20:46:27.657766  848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:46:27.660128  848852 out.go:177] 
	W1005 20:46:27.662010  848852 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns, kube-proxy
	W1005 20:46:27.662024  848852 out.go:239] * 
	W1005 20:46:27.662801  848852 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1005 20:46:27.665143  848852 out.go:177] 
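[Editor's note] The GUEST_START failure above is the readiness check at the end of the wait: the harness expects a Running pod for each of a fixed set of kube-system components, and here the k8s-apps kube-dns and kube-proxy never left Pending, consistent with the NotReady node condition shown in the describe output further down. As a hypothetical sketch of that kind of check, not the test's own code, using client-go with an assumed kubeconfig path and the conventional k8s-app labels:

// apps_running_check.go: hypothetical sketch of an "apps_running" style check.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// components that must each have at least one Running pod in kube-system
var expected = []string{"kube-dns", "kube-proxy"}

func main() {
	// kubeconfig path is an assumption for illustration
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for _, app := range expected {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=" + app})
		if err != nil {
			panic(err)
		}
		running := 0
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				running++
			}
		}
		if running == 0 {
			fmt.Printf("missing component: %s (no Running pods)\n", app)
		}
	}
}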
	
	* 
	* ==> Docker <==
	* Oct 05 20:37:00 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:00.741750210Z" level=info msg="Loading containers: start."
	Oct 05 20:37:00 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:00.835828175Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Oct 05 20:37:00 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:00.873800369Z" level=info msg="Loading containers: done."
	Oct 05 20:37:00 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:00.883733188Z" level=info msg="Docker daemon" commit=1a79695 graphdriver=overlay2 version=24.0.6
	Oct 05 20:37:00 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:00.883797436Z" level=info msg="Daemon has completed initialization"
	Oct 05 20:37:00 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:00.908207956Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 05 20:37:00 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:00.908224826Z" level=info msg="API listen on [::]:2376"
	Oct 05 20:37:00 old-k8s-version-330869 systemd[1]: Started Docker Application Container Engine.
	Oct 05 20:37:04 old-k8s-version-330869 systemd[1]: Stopping Docker Application Container Engine...
	Oct 05 20:37:04 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:04.385384032Z" level=info msg="Processing signal 'terminated'"
	Oct 05 20:37:04 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:04.387134388Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 05 20:37:04 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:04.388068483Z" level=info msg="Daemon shutdown complete"
	Oct 05 20:37:04 old-k8s-version-330869 systemd[1]: docker.service: Deactivated successfully.
	Oct 05 20:37:04 old-k8s-version-330869 systemd[1]: Stopped Docker Application Container Engine.
	Oct 05 20:37:04 old-k8s-version-330869 systemd[1]: Starting Docker Application Container Engine...
	Oct 05 20:37:04 old-k8s-version-330869 dockerd[1366]: time="2023-10-05T20:37:04.451611505Z" level=info msg="Starting up"
	Oct 05 20:37:04 old-k8s-version-330869 dockerd[1366]: time="2023-10-05T20:37:04.461647092Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 05 20:37:06 old-k8s-version-330869 dockerd[1366]: time="2023-10-05T20:37:06.744132135Z" level=info msg="Loading containers: start."
	Oct 05 20:37:06 old-k8s-version-330869 dockerd[1366]: time="2023-10-05T20:37:06.839612411Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Oct 05 20:37:07 old-k8s-version-330869 dockerd[1366]: time="2023-10-05T20:37:07.001066181Z" level=info msg="Loading containers: done."
	Oct 05 20:37:07 old-k8s-version-330869 dockerd[1366]: time="2023-10-05T20:37:07.016241859Z" level=info msg="Docker daemon" commit=1a79695 graphdriver=overlay2 version=24.0.6
	Oct 05 20:37:07 old-k8s-version-330869 dockerd[1366]: time="2023-10-05T20:37:07.016300931Z" level=info msg="Daemon has completed initialization"
	Oct 05 20:37:07 old-k8s-version-330869 dockerd[1366]: time="2023-10-05T20:37:07.039742052Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 05 20:37:07 old-k8s-version-330869 dockerd[1366]: time="2023-10-05T20:37:07.039779377Z" level=info msg="API listen on [::]:2376"
	Oct 05 20:37:07 old-k8s-version-330869 systemd[1]: Started Docker Application Container Engine.
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	56472dff5f81f       6e38f40d628db       8 minutes ago       Running             storage-provisioner       0                   64075514dc163       storage-provisioner
	9f0be33584868       bf261d1579144       8 minutes ago       Running             coredns                   0                   2e7135e437f0c       coredns-5644d7b6d9-k2f47
	cef84f5b51c49       c21b0c7400f98       8 minutes ago       Running             kube-proxy                0                   a228f4c03cdba       kube-proxy-n9cwb
	530e42b9f6c77       b2756210eeabf       9 minutes ago       Running             etcd                      0                   7ab4e42c79a68       etcd-old-k8s-version-330869
	6c66019a6e010       06a629a7e51cd       9 minutes ago       Running             kube-controller-manager   0                   e87f561b15eaf       kube-controller-manager-old-k8s-version-330869
	a576da8318f84       301ddc62b80b1       9 minutes ago       Running             kube-scheduler            0                   84631805dc0e9       kube-scheduler-old-k8s-version-330869
	91420fd2d357f       b305571ca60a5       9 minutes ago       Running             kube-apiserver            0                   a2a65ce6717dd       kube-apiserver-old-k8s-version-330869
	
	* 
	* ==> coredns [9f0be3358486] <==
	* .:53
	2023-10-05T20:37:40.506Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-10-05T20:37:40.507Z [INFO] CoreDNS-1.6.2
	2023-10-05T20:37:40.507Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-330869
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-330869
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=300d55cee86053f5b4c7a654fc8e7b9d3c030d53
	                    minikube.k8s.io/name=old-k8s-version-330869
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_05T20_37_23_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Oct 2023 20:37:18 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Oct 2023 20:46:20 +0000   Thu, 05 Oct 2023 20:37:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Oct 2023 20:46:20 +0000   Thu, 05 Oct 2023 20:37:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Oct 2023 20:46:20 +0000   Thu, 05 Oct 2023 20:37:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 05 Oct 2023 20:46:20 +0000   Thu, 05 Oct 2023 20:40:49 +0000   KubeletNotReady              PLEG is not healthy: pleg was last seen active 8m40.67992945s ago; threshold is 3m0s
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-330869
	Capacity:
	 cpu:                8
	 ephemeral-storage:  304681132Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32859420Ki
	 pods:               110
	Allocatable:
	 cpu:                8
	 ephemeral-storage:  304681132Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32859420Ki
	 pods:               110
	System Info:
	 Machine ID:                 da3d4e78336e4de3801cc5f1121e363a
	 System UUID:                fb98631f-d977-49f6-8d13-47582452d2b5
	 Boot ID:                    1c650140-d8f3-4a50-ac83-e0e6baf94598
	 Kernel Version:             5.15.0-1044-gcp
	 OS Image:                   Ubuntu 22.04.3 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  docker://24.0.6
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (7 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-k2f47                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m50s
	  kube-system                etcd-old-k8s-version-330869                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m48s
	  kube-system                kube-apiserver-old-k8s-version-330869             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m5s
	  kube-system                kube-controller-manager-old-k8s-version-330869    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m47s
	  kube-system                kube-proxy-n9cwb                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m50s
	  kube-system                kube-scheduler-old-k8s-version-330869             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m47s
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                650m (8%)   0 (0%)
	  memory             70Mi (0%)   170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From                                Message
	  ----    ------                   ----                   ----                                -------
	  Normal  NodeHasSufficientMemory  9m16s (x8 over 9m16s)  kubelet, old-k8s-version-330869     Node old-k8s-version-330869 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m16s (x8 over 9m16s)  kubelet, old-k8s-version-330869     Node old-k8s-version-330869 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m16s (x7 over 9m16s)  kubelet, old-k8s-version-330869     Node old-k8s-version-330869 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m48s                  kube-proxy, old-k8s-version-330869  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0e ee c2 a6 29 ac 08 06
	[Oct 5 20:39] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev bridge
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 fb 5f c9 9e d7 08 06
	[  +0.715332] IPv4: martian source 10.244.0.1 from 10.244.0.7, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e 50 38 95 e7 63 08 06
	[  +8.065920] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 06 69 10 43 1f 0b 08 06
	[ +16.180606] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 6a 13 59 d9 da 08 06
	[Oct 5 20:43] IPv4: martian source 10.244.0.1 from 10.244.0.8, on dev eth0
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff ae 52 50 a9 6f 53 08 06
	[Oct 5 20:44] IPv4: martian source 10.244.0.1 from 10.244.0.10, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 46 f6 d1 58 d2 08 06
	[ +19.224580] IPv4: martian source 10.244.0.1 from 10.244.0.7, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3e 9e 9c 80 d0 43 08 06
	[  +8.732079] IPv4: martian source 10.244.0.1 from 10.244.0.9, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 96 3b 8d 2f b2 6f 08 06
	[  +1.563207] IPv4: martian source 10.244.0.1 from 10.244.0.7, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 1a 9a 54 1a fc 08 06
	[  +5.814222] IPv4: martian source 10.244.0.1 from 10.244.0.9, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea e4 45 a0 bd b2 08 06
	[Oct 5 20:45] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff f2 32 f7 4c 9e 13 08 06
	[ +35.890083] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ba 7a 81 76 84 0a 08 06
	
	* 
	* ==> etcd [530e42b9f6c7] <==
	* 2023-10-05 20:37:13.535147 I | raft: 9f0758e1c58a86ed became follower at term 0
	2023-10-05 20:37:13.535157 I | raft: newRaft 9f0758e1c58a86ed [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-10-05 20:37:13.535162 I | raft: 9f0758e1c58a86ed became follower at term 1
	2023-10-05 20:37:13.540464 W | auth: simple token is not cryptographically signed
	2023-10-05 20:37:13.543597 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-10-05 20:37:13.544562 I | etcdserver: 9f0758e1c58a86ed as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-10-05 20:37:13.544945 I | etcdserver/membership: added member 9f0758e1c58a86ed [https://192.168.85.2:2380] to cluster 68eaea490fab4e05
	2023-10-05 20:37:13.546416 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-10-05 20:37:13.546557 I | embed: listening for metrics on http://192.168.85.2:2381
	2023-10-05 20:37:13.546670 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-10-05 20:37:14.535543 I | raft: 9f0758e1c58a86ed is starting a new election at term 1
	2023-10-05 20:37:14.535587 I | raft: 9f0758e1c58a86ed became candidate at term 2
	2023-10-05 20:37:14.535619 I | raft: 9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2
	2023-10-05 20:37:14.535634 I | raft: 9f0758e1c58a86ed became leader at term 2
	2023-10-05 20:37:14.535644 I | raft: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
	2023-10-05 20:37:14.535840 I | etcdserver: setting up the initial cluster version to 3.3
	2023-10-05 20:37:14.536888 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-10-05 20:37:14.536932 I | etcdserver/api: enabled capabilities for version 3.3
	2023-10-05 20:37:14.536948 I | embed: ready to serve client requests
	2023-10-05 20:37:14.536984 I | etcdserver: published {Name:old-k8s-version-330869 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
	2023-10-05 20:37:14.537012 I | embed: ready to serve client requests
	2023-10-05 20:37:14.538573 I | embed: serving client requests on 192.168.85.2:2379
	2023-10-05 20:37:14.538614 I | embed: serving client requests on 127.0.0.1:2379
	2023-10-05 20:37:52.212401 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-5644d7b6d9-k2f47\" " with result "range_response_count:1 size:1693" took too long (122.897349ms) to execute
	2023-10-05 20:37:52.212479 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:215" took too long (125.865716ms) to execute
	
	* 
	* ==> kernel <==
	*  20:46:28 up  2:28,  0 users,  load average: 2.29, 2.74, 2.83
	Linux old-k8s-version-330869 5.15.0-1044-gcp #52~20.04.1-Ubuntu SMP Wed Sep 20 16:25:19 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kube-apiserver [91420fd2d357] <==
	* I1005 20:37:18.648312       1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
	E1005 20:37:18.650296       1 controller.go:154] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.85.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1005 20:37:18.651420       1 apiservice_controller.go:94] Starting APIServiceRegistrationController
	I1005 20:37:18.651503       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I1005 20:37:18.747656       1 cache.go:39] Caches are synced for autoregister controller
	I1005 20:37:18.749173       1 shared_informer.go:204] Caches are synced for crd-autoregister 
	I1005 20:37:18.762082       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1005 20:37:18.762123       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1005 20:37:18.844252       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I1005 20:37:19.647651       1 controller.go:107] OpenAPI AggregationController: Processing item 
	I1005 20:37:19.647684       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1005 20:37:19.647699       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1005 20:37:19.651492       1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
	I1005 20:37:19.654366       1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
	I1005 20:37:19.654390       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I1005 20:37:21.429015       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1005 20:37:21.708924       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1005 20:37:22.050878       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1005 20:37:22.051722       1 controller.go:606] quota admission added evaluator for: endpoints
	I1005 20:37:22.934960       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I1005 20:37:23.322631       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I1005 20:37:23.671803       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I1005 20:37:38.427328       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I1005 20:37:38.453764       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I1005 20:37:38.539579       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [6c66019a6e01] <==
	* I1005 20:37:38.466241       1 shared_informer.go:204] Caches are synced for HPA 
	I1005 20:37:38.482277       1 shared_informer.go:204] Caches are synced for stateful set 
	I1005 20:37:38.487462       1 shared_informer.go:204] Caches are synced for ReplicationController 
	I1005 20:37:38.487721       1 shared_informer.go:204] Caches are synced for GC 
	I1005 20:37:38.487734       1 shared_informer.go:204] Caches are synced for PVC protection 
	I1005 20:37:38.487705       1 shared_informer.go:204] Caches are synced for attach detach 
	I1005 20:37:38.512949       1 shared_informer.go:204] Caches are synced for ReplicaSet 
	I1005 20:37:38.537588       1 shared_informer.go:204] Caches are synced for deployment 
	I1005 20:37:38.537985       1 shared_informer.go:204] Caches are synced for resource quota 
	I1005 20:37:38.542695       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"59e37402-092c-492a-8a24-0e86b565f6d7", APIVersion:"apps/v1", ResourceVersion:"192", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-5644d7b6d9 to 2
	I1005 20:37:38.550402       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"09eba7cb-b0a0-47ac-8889-0bda35b1e552", APIVersion:"apps/v1", ResourceVersion:"312", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-k2f47
	I1005 20:37:38.553758       1 shared_informer.go:204] Caches are synced for expand 
	I1005 20:37:38.559594       1 shared_informer.go:204] Caches are synced for resource quota 
	I1005 20:37:38.560551       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"09eba7cb-b0a0-47ac-8889-0bda35b1e552", APIVersion:"apps/v1", ResourceVersion:"312", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-wmjhd
	I1005 20:37:38.588856       1 shared_informer.go:204] Caches are synced for disruption 
	I1005 20:37:38.588883       1 disruption.go:341] Sending events to api server.
	I1005 20:37:38.605013       1 shared_informer.go:204] Caches are synced for persistent volume 
	I1005 20:37:38.646306       1 shared_informer.go:204] Caches are synced for garbage collector 
	I1005 20:37:38.681988       1 shared_informer.go:204] Caches are synced for service account 
	I1005 20:37:38.683218       1 shared_informer.go:204] Caches are synced for namespace 
	I1005 20:37:38.686440       1 shared_informer.go:204] Caches are synced for garbage collector 
	I1005 20:37:38.686460       1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1005 20:37:38.941357       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"59e37402-092c-492a-8a24-0e86b565f6d7", APIVersion:"apps/v1", ResourceVersion:"341", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-5644d7b6d9 to 1
	I1005 20:37:38.999527       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"09eba7cb-b0a0-47ac-8889-0bda35b1e552", APIVersion:"apps/v1", ResourceVersion:"342", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-5644d7b6d9-wmjhd
	I1005 20:40:53.441120       1 node_lifecycle_controller.go:1058] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	
	* 
	* ==> kube-proxy [cef84f5b51c4] <==
	* W1005 20:37:40.040149       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1005 20:37:40.052818       1 node.go:135] Successfully retrieved node IP: 192.168.85.2
	I1005 20:37:40.052863       1 server_others.go:149] Using iptables Proxier.
	I1005 20:37:40.053425       1 server.go:529] Version: v1.16.0
	I1005 20:37:40.053948       1 config.go:131] Starting endpoints config controller
	I1005 20:37:40.053980       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1005 20:37:40.054066       1 config.go:313] Starting service config controller
	I1005 20:37:40.054081       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1005 20:37:40.157367       1 shared_informer.go:204] Caches are synced for service config 
	I1005 20:37:40.224240       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [a576da8318f8] <==
	* W1005 20:37:18.743643       1 authentication.go:79] Authentication is disabled
	I1005 20:37:18.743716       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1005 20:37:18.744158       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1005 20:37:18.842482       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1005 20:37:18.843126       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1005 20:37:18.843177       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1005 20:37:18.843316       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1005 20:37:18.843330       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1005 20:37:18.843386       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1005 20:37:18.843440       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1005 20:37:18.843521       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1005 20:37:18.843846       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1005 20:37:18.844748       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1005 20:37:18.927555       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1005 20:37:19.843756       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1005 20:37:19.844669       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1005 20:37:19.845775       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1005 20:37:19.846714       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1005 20:37:19.850359       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1005 20:37:19.919069       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1005 20:37:19.919980       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1005 20:37:19.927483       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1005 20:37:19.928788       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1005 20:37:19.929881       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1005 20:37:19.931691       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	
	* 
	* ==> kubelet <==
	* Oct 05 20:44:26 old-k8s-version-330869 kubelet[2004]: I1005 20:44:26.447919    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 6m46.630387826s ago; threshold is 3m0s
	Oct 05 20:44:31 old-k8s-version-330869 kubelet[2004]: I1005 20:44:31.448136    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 6m51.630595657s ago; threshold is 3m0s
	Oct 05 20:44:36 old-k8s-version-330869 kubelet[2004]: I1005 20:44:36.448951    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 6m56.631397997s ago; threshold is 3m0s
	Oct 05 20:44:41 old-k8s-version-330869 kubelet[2004]: I1005 20:44:41.449179    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m1.631643258s ago; threshold is 3m0s
	Oct 05 20:44:46 old-k8s-version-330869 kubelet[2004]: I1005 20:44:46.449442    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m6.631908491s ago; threshold is 3m0s
	Oct 05 20:44:51 old-k8s-version-330869 kubelet[2004]: I1005 20:44:51.449668    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m11.632131918s ago; threshold is 3m0s
	Oct 05 20:44:56 old-k8s-version-330869 kubelet[2004]: I1005 20:44:56.449897    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m16.63236401s ago; threshold is 3m0s
	Oct 05 20:45:01 old-k8s-version-330869 kubelet[2004]: I1005 20:45:01.450139    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m21.63260354s ago; threshold is 3m0s
	Oct 05 20:45:06 old-k8s-version-330869 kubelet[2004]: I1005 20:45:06.450376    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m26.632844967s ago; threshold is 3m0s
	Oct 05 20:45:11 old-k8s-version-330869 kubelet[2004]: I1005 20:45:11.450572    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m31.63303949s ago; threshold is 3m0s
	Oct 05 20:45:16 old-k8s-version-330869 kubelet[2004]: I1005 20:45:16.450802    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m36.633267162s ago; threshold is 3m0s
	Oct 05 20:45:21 old-k8s-version-330869 kubelet[2004]: I1005 20:45:21.451035    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m41.6335025s ago; threshold is 3m0s
	Oct 05 20:45:26 old-k8s-version-330869 kubelet[2004]: I1005 20:45:26.451262    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m46.633727726s ago; threshold is 3m0s
	Oct 05 20:45:31 old-k8s-version-330869 kubelet[2004]: I1005 20:45:31.451519    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m51.633961825s ago; threshold is 3m0s
	Oct 05 20:45:36 old-k8s-version-330869 kubelet[2004]: I1005 20:45:36.451815    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m56.634284171s ago; threshold is 3m0s
	Oct 05 20:45:41 old-k8s-version-330869 kubelet[2004]: I1005 20:45:41.452058    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m1.634517734s ago; threshold is 3m0s
	Oct 05 20:45:46 old-k8s-version-330869 kubelet[2004]: I1005 20:45:46.452320    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m6.634787132s ago; threshold is 3m0s
	Oct 05 20:45:51 old-k8s-version-330869 kubelet[2004]: I1005 20:45:51.452654    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m11.635118625s ago; threshold is 3m0s
	Oct 05 20:45:56 old-k8s-version-330869 kubelet[2004]: I1005 20:45:56.452929    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m16.635390007s ago; threshold is 3m0s
	Oct 05 20:46:01 old-k8s-version-330869 kubelet[2004]: I1005 20:46:01.453256    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m21.635701631s ago; threshold is 3m0s
	Oct 05 20:46:06 old-k8s-version-330869 kubelet[2004]: I1005 20:46:06.453491    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m26.635958787s ago; threshold is 3m0s
	Oct 05 20:46:11 old-k8s-version-330869 kubelet[2004]: I1005 20:46:11.453759    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m31.636224705s ago; threshold is 3m0s
	Oct 05 20:46:16 old-k8s-version-330869 kubelet[2004]: I1005 20:46:16.454007    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m36.636470752s ago; threshold is 3m0s
	Oct 05 20:46:21 old-k8s-version-330869 kubelet[2004]: I1005 20:46:21.454246    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m41.636710535s ago; threshold is 3m0s
	Oct 05 20:46:26 old-k8s-version-330869 kubelet[2004]: I1005 20:46:26.454534    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m46.636999608s ago; threshold is 3m0s
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-330869 -n old-k8s-version-330869
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-330869 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: coredns-5644d7b6d9-k2f47 kube-proxy-n9cwb storage-provisioner
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-330869 describe pod coredns-5644d7b6d9-k2f47 kube-proxy-n9cwb storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-330869 describe pod coredns-5644d7b6d9-k2f47 kube-proxy-n9cwb storage-provisioner: exit status 1 (60.333071ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-5644d7b6d9-k2f47" not found
	Error from server (NotFound): pods "kube-proxy-n9cwb" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-330869 describe pod coredns-5644d7b6d9-k2f47 kube-proxy-n9cwb storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (581.47s)
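The kubelet entries above show PLEG ("pod lifecycle event generator") reported unhealthy for more than 8 minutes against a 3m0s threshold, which keeps the kubelet reporting the node as not ready and is consistent with the controller-manager entering master disruption mode at 20:40:53; that in turn explains why coredns, kube-proxy and storage-provisioner never reached Running. A rough way to confirm this from the host, assuming the kicbase container runs the kubelet under systemd (the case for current kicbase images, but worth verifying), would be:

	out/minikube-linux-amd64 -p old-k8s-version-330869 ssh "sudo journalctl -u kubelet --no-pager | grep -i pleg | tail -n 20"
	kubectl --context old-k8s-version-330869 get nodes -o wide
	kubectl --context old-k8s-version-330869 get pods -A -o wide --field-selector=status.phase!=Running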

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (483.86s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-330869 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f8bda2b8-a4ff-4a22-bcdd-86323959b312] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
E1005 20:46:54.282896  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/flannel-264029/client.crt: no such file or directory
E1005 20:47:12.585667  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/bridge-264029/client.crt: no such file or directory
E1005 20:47:24.174011  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/ingress-addon-legacy-111068/client.crt: no such file or directory
E1005 20:47:35.778722  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kubenet-264029/client.crt: no such file or directory
E1005 20:47:40.269288  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/bridge-264029/client.crt: no such file or directory
E1005 20:47:55.239578  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/functional-168323/client.crt: no such file or directory
E1005 20:48:03.461826  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kubenet-264029/client.crt: no such file or directory
E1005 20:48:35.802717  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/no-preload-477708/client.crt: no such file or directory
E1005 20:48:35.808028  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/no-preload-477708/client.crt: no such file or directory
E1005 20:48:35.818310  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/no-preload-477708/client.crt: no such file or directory
E1005 20:48:35.838623  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/no-preload-477708/client.crt: no such file or directory
E1005 20:48:35.878927  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/no-preload-477708/client.crt: no such file or directory
E1005 20:48:35.959283  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/no-preload-477708/client.crt: no such file or directory
E1005 20:48:36.119704  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/no-preload-477708/client.crt: no such file or directory
E1005 20:48:36.440343  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/no-preload-477708/client.crt: no such file or directory
E1005 20:48:37.080882  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/no-preload-477708/client.crt: no such file or directory
E1005 20:48:38.361151  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/no-preload-477708/client.crt: no such file or directory
E1005 20:48:40.922188  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/no-preload-477708/client.crt: no such file or directory
E1005 20:48:43.973330  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/skaffold-440533/client.crt: no such file or directory
E1005 20:48:45.237396  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/auto-264029/client.crt: no such file or directory
E1005 20:48:46.043327  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/no-preload-477708/client.crt: no such file or directory
E1005 20:48:50.219966  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kindnet-264029/client.crt: no such file or directory
E1005 20:48:51.013960  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/default-k8s-diff-port-973002/client.crt: no such file or directory
E1005 20:48:51.019255  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/default-k8s-diff-port-973002/client.crt: no such file or directory
E1005 20:48:51.029569  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/default-k8s-diff-port-973002/client.crt: no such file or directory
E1005 20:48:51.049912  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/default-k8s-diff-port-973002/client.crt: no such file or directory
E1005 20:48:51.090212  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/default-k8s-diff-port-973002/client.crt: no such file or directory
E1005 20:48:51.170666  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/default-k8s-diff-port-973002/client.crt: no such file or directory
E1005 20:48:51.331119  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/default-k8s-diff-port-973002/client.crt: no such file or directory
E1005 20:48:51.651554  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/default-k8s-diff-port-973002/client.crt: no such file or directory
E1005 20:48:52.292522  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/default-k8s-diff-port-973002/client.crt: no such file or directory
E1005 20:48:53.573156  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/default-k8s-diff-port-973002/client.crt: no such file or directory
E1005 20:48:56.133482  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/default-k8s-diff-port-973002/client.crt: no such file or directory
E1005 20:48:56.283783  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/no-preload-477708/client.crt: no such file or directory
E1005 20:49:01.254210  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/default-k8s-diff-port-973002/client.crt: no such file or directory
E1005 20:49:11.495121  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/default-k8s-diff-port-973002/client.crt: no such file or directory
E1005 20:49:16.764972  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/no-preload-477708/client.crt: no such file or directory
E1005 20:49:31.975877  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/default-k8s-diff-port-973002/client.crt: no such file or directory
E1005 20:49:51.412972  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/addons-432328/client.crt: no such file or directory
E1005 20:49:52.191806  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/functional-168323/client.crt: no such file or directory
E1005 20:49:57.726131  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/no-preload-477708/client.crt: no such file or directory
E1005 20:50:07.017549  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/skaffold-440533/client.crt: no such file or directory
E1005 20:50:12.936164  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/default-k8s-diff-port-973002/client.crt: no such file or directory
E1005 20:50:13.020519  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/calico-264029/client.crt: no such file or directory
E1005 20:50:14.451380  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/custom-flannel-264029/client.crt: no such file or directory
E1005 20:50:27.221114  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/ingress-addon-legacy-111068/client.crt: no such file or directory
E1005 20:50:42.816629  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/false-264029/client.crt: no such file or directory
E1005 20:50:43.872356  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/enable-default-cni-264029/client.crt: no such file or directory
E1005 20:51:19.647264  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/no-preload-477708/client.crt: no such file or directory
E1005 20:51:26.598820  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/flannel-264029/client.crt: no such file or directory
E1005 20:51:34.857363  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/default-k8s-diff-port-973002/client.crt: no such file or directory
E1005 20:52:12.585994  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/bridge-264029/client.crt: no such file or directory
E1005 20:52:24.174013  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/ingress-addon-legacy-111068/client.crt: no such file or directory
E1005 20:52:35.778503  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kubenet-264029/client.crt: no such file or directory
helpers_test.go:344: "busybox" [f8bda2b8-a4ff-4a22-bcdd-86323959b312] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1005 20:53:35.801974  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/no-preload-477708/client.crt: no such file or directory
E1005 20:53:43.973100  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/skaffold-440533/client.crt: no such file or directory
E1005 20:53:45.238443  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/auto-264029/client.crt: no such file or directory
E1005 20:53:50.220090  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kindnet-264029/client.crt: no such file or directory
E1005 20:53:51.013693  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/default-k8s-diff-port-973002/client.crt: no such file or directory
E1005 20:54:03.488106  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/no-preload-477708/client.crt: no such file or directory
E1005 20:54:18.698368  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/default-k8s-diff-port-973002/client.crt: no such file or directory
start_stop_delete_test.go:196: ***** TestStartStop/group/old-k8s-version/serial/DeployApp: pod "integration-test=busybox" failed to start within 8m0s: context deadline exceeded ****
start_stop_delete_test.go:196: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-330869 -n old-k8s-version-330869
start_stop_delete_test.go:196: TestStartStop/group/old-k8s-version/serial/DeployApp: showing logs for failed pods as of 2023-10-05 20:54:29.84027645 +0000 UTC m=+3096.428326578
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-330869 describe po busybox -n default
start_stop_delete_test.go:196: (dbg) kubectl --context old-k8s-version-330869 describe po busybox -n default:
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             old-k8s-version-330869/192.168.85.2
Start Time:       Thu, 05 Oct 2023 20:52:42 +0000
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Containers:
  busybox:
    Container ID:  
    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      sleep
      3600
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-r58hx (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  default-token-r58hx:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-r58hx
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  8m                  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
  Warning  FailedScheduling  6m41s (x1 over 8m)  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
  Normal   Scheduled         107s                default-scheduler  Successfully assigned default/busybox to old-k8s-version-330869
  Normal   Pulling           106s                kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
  Normal   Pulled            105s                kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
  Normal   Created           105s                kubelet            Created container busybox
  Normal   Started           105s                kubelet            Started container busybox
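The FailedScheduling events show the busybox pod was unschedulable for roughly the first six and a half minutes of the 8m0s wait because of a node taint (presumably node.kubernetes.io/not-ready, given the "all Nodes are not-Ready" controller-manager message in the FirstStart logs) and was only scheduled 107s before the describe output above was captured. One way to see which taint was involved, using only standard kubectl fields and nothing minikube-specific, would be:

	kubectl --context old-k8s-version-330869 get node old-k8s-version-330869 -o jsonpath='{.spec.taints}'
	kubectl --context old-k8s-version-330869 describe node old-k8s-version-330869 | grep -i -A 2 taints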
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-330869 logs busybox -n default
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-330869 logs busybox -n default: exit status 1 (70.530409ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "busybox" in pod "busybox" is waiting to start: ContainerCreating

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-330869 logs busybox -n default: exit status 1
start_stop_delete_test.go:196: wait: integration-test=busybox within 8m0s: context deadline exceeded
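The 8m0s wait here is performed by a Go polling helper inside the test code rather than by kubectl; an approximate standalone equivalent (offered only for illustration, not the helper the harness actually runs) is:

	kubectl --context old-k8s-version-330869 wait --for=condition=Ready pod -l integration-test=busybox -n default --timeout=8m0s

In this run the pod was still not Ready when the deadline expired, so the equivalent command would also have timed out.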
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-330869
helpers_test.go:235: (dbg) docker inspect old-k8s-version-330869:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0ffb18ccd18d3a519fe433681e826678add5b991e61a9bda0dbfb9ea0dee4ec9",
	        "Created": "2023-10-05T20:36:53.706444621Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 850463,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-05T20:36:54.08354438Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:94671ba3754e2c6976414eaf20a0c7861a5d2f9fc631e1161e8ab0ded9062c52",
	        "ResolvConfPath": "/var/lib/docker/containers/0ffb18ccd18d3a519fe433681e826678add5b991e61a9bda0dbfb9ea0dee4ec9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0ffb18ccd18d3a519fe433681e826678add5b991e61a9bda0dbfb9ea0dee4ec9/hostname",
	        "HostsPath": "/var/lib/docker/containers/0ffb18ccd18d3a519fe433681e826678add5b991e61a9bda0dbfb9ea0dee4ec9/hosts",
	        "LogPath": "/var/lib/docker/containers/0ffb18ccd18d3a519fe433681e826678add5b991e61a9bda0dbfb9ea0dee4ec9/0ffb18ccd18d3a519fe433681e826678add5b991e61a9bda0dbfb9ea0dee4ec9-json.log",
	        "Name": "/old-k8s-version-330869",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-330869:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-330869",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0d686d9896e444b3737621f3bb0399b6dd3266180ac56b4497ffd2325577edca-init/diff:/var/lib/docker/overlay2/e65b3f74dc6bfb6767eea300df98bf2be99245c1b234ea43800cf021cd81177d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0d686d9896e444b3737621f3bb0399b6dd3266180ac56b4497ffd2325577edca/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0d686d9896e444b3737621f3bb0399b6dd3266180ac56b4497ffd2325577edca/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0d686d9896e444b3737621f3bb0399b6dd3266180ac56b4497ffd2325577edca/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-330869",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-330869/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-330869",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-330869",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-330869",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e9d5900763ffac860582f91e1cc24789bad5009ed40771fbeb5d999159eee780",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33383"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33382"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33379"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33381"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33380"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e9d5900763ff",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-330869": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0ffb18ccd18d",
	                        "old-k8s-version-330869"
	                    ],
	                    "NetworkID": "b2ec8c9cc8a493d14667efb735586eda5a96dcf492505b426d598dbb05a7c972",
	                    "EndpointID": "75ea406fffba6499772cde5d775de2d6bc83b43060d83c230ed678cdae12bc5e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
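If only the container state and the published host ports (how the harness reaches the apiserver on 8443) are needed, docker inspect's standard -f/--format Go-template support can extract them from the dump above directly; nothing minikube-specific is assumed here:

	docker inspect -f '{{.State.Status}}' old-k8s-version-330869
	docker inspect -f '{{json .NetworkSettings.Ports}}' old-k8s-version-330869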
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-330869 -n old-k8s-version-330869
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-330869 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p embed-certs-411409 sudo                             | embed-certs-411409           | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p embed-certs-411409                                  | embed-certs-411409           | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-411409                                  | embed-certs-411409           | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-411409                                  | embed-certs-411409           | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	| delete  | -p embed-certs-411409                                  | embed-certs-411409           | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	| start   | -p newest-cni-251602 --memory=2200 --alsologtostderr   | newest-cni-251602            | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:45 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=docker            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| ssh     | -p                                                     | default-k8s-diff-port-973002 | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	|         | default-k8s-diff-port-973002                           |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-973002 | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	|         | default-k8s-diff-port-973002                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-973002 | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	|         | default-k8s-diff-port-973002                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-973002 | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	|         | default-k8s-diff-port-973002                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-973002 | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	|         | default-k8s-diff-port-973002                           |                              |         |         |                     |                     |
	| ssh     | -p no-preload-477708 sudo                              | no-preload-477708            | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p no-preload-477708                                   | no-preload-477708            | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-477708                                   | no-preload-477708            | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-477708                                   | no-preload-477708            | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	| delete  | -p no-preload-477708                                   | no-preload-477708            | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	| addons  | enable metrics-server -p newest-cni-251602             | newest-cni-251602            | jenkins | v1.31.2 | 05 Oct 23 20:45 UTC | 05 Oct 23 20:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-251602                                   | newest-cni-251602            | jenkins | v1.31.2 | 05 Oct 23 20:45 UTC | 05 Oct 23 20:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-251602                  | newest-cni-251602            | jenkins | v1.31.2 | 05 Oct 23 20:45 UTC | 05 Oct 23 20:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-251602 --memory=2200 --alsologtostderr   | newest-cni-251602            | jenkins | v1.31.2 | 05 Oct 23 20:45 UTC | 05 Oct 23 20:45 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=docker            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-251602 sudo                              | newest-cni-251602            | jenkins | v1.31.2 | 05 Oct 23 20:45 UTC | 05 Oct 23 20:45 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-251602                                   | newest-cni-251602            | jenkins | v1.31.2 | 05 Oct 23 20:45 UTC | 05 Oct 23 20:45 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-251602                                   | newest-cni-251602            | jenkins | v1.31.2 | 05 Oct 23 20:45 UTC | 05 Oct 23 20:45 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-251602                                   | newest-cni-251602            | jenkins | v1.31.2 | 05 Oct 23 20:45 UTC | 05 Oct 23 20:45 UTC |
	| delete  | -p newest-cni-251602                                   | newest-cni-251602            | jenkins | v1.31.2 | 05 Oct 23 20:45 UTC | 05 Oct 23 20:45 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/05 20:45:14
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1005 20:45:14.405012  941739 out.go:296] Setting OutFile to fd 1 ...
	I1005 20:45:14.405318  941739 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:45:14.405332  941739 out.go:309] Setting ErrFile to fd 2...
	I1005 20:45:14.405338  941739 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:45:14.405563  941739 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-491115/.minikube/bin
	I1005 20:45:14.406125  941739 out.go:303] Setting JSON to false
	I1005 20:45:14.408036  941739 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8863,"bootTime":1696529852,"procs":691,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1005 20:45:14.408112  941739 start.go:138] virtualization: kvm guest
	I1005 20:45:14.411041  941739 out.go:177] * [newest-cni-251602] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1005 20:45:14.412825  941739 out.go:177]   - MINIKUBE_LOCATION=17363
	I1005 20:45:14.414496  941739 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 20:45:14.412885  941739 notify.go:220] Checking for updates...
	I1005 20:45:14.417488  941739 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17363-491115/kubeconfig
	I1005 20:45:14.419444  941739 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-491115/.minikube
	I1005 20:45:14.420812  941739 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1005 20:45:14.422387  941739 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1005 20:45:14.424417  941739 config.go:182] Loaded profile config "newest-cni-251602": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1005 20:45:14.424920  941739 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 20:45:14.447137  941739 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 20:45:14.447233  941739 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 20:45:14.502313  941739 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:55 SystemTime:2023-10-05 20:45:14.492746667 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1005 20:45:14.502465  941739 docker.go:294] overlay module found
	I1005 20:45:14.504743  941739 out.go:177] * Using the docker driver based on existing profile
	I1005 20:45:14.506376  941739 start.go:298] selected driver: docker
	I1005 20:45:14.506399  941739 start.go:902] validating driver "docker" against &{Name:newest-cni-251602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-251602 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress:
Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 20:45:14.506507  941739 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1005 20:45:14.507273  941739 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 20:45:14.559655  941739 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:55 SystemTime:2023-10-05 20:45:14.550952004 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1005 20:45:14.560012  941739 start_flags.go:942] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1005 20:45:14.560046  941739 cni.go:84] Creating CNI manager for ""
	I1005 20:45:14.560066  941739 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1005 20:45:14.560079  941739 start_flags.go:321] config:
	{Name:newest-cni-251602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-251602 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 20:45:14.562326  941739 out.go:177] * Starting control plane node newest-cni-251602 in cluster newest-cni-251602
	I1005 20:45:14.565495  941739 cache.go:122] Beginning downloading kic base image for docker with docker
	I1005 20:45:14.567000  941739 out.go:177] * Pulling base image ...
	I1005 20:45:14.568566  941739 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1005 20:45:14.568620  941739 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17363-491115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I1005 20:45:14.568631  941739 cache.go:57] Caching tarball of preloaded images
	I1005 20:45:14.568707  941739 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1005 20:45:14.568717  941739 preload.go:174] Found /home/jenkins/minikube-integration/17363-491115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1005 20:45:14.568791  941739 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1005 20:45:14.568916  941739 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602/config.json ...
	I1005 20:45:14.586420  941739 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon, skipping pull
	I1005 20:45:14.586452  941739 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in daemon, skipping load
	I1005 20:45:14.586477  941739 cache.go:195] Successfully downloaded all kic artifacts
	I1005 20:45:14.586522  941739 start.go:365] acquiring machines lock for newest-cni-251602: {Name:mkefe4baf7b8136c10dd9c20a98860ec3c495766 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 20:45:14.586596  941739 start.go:369] acquired machines lock for "newest-cni-251602" in 47.72µs
	I1005 20:45:14.586622  941739 start.go:96] Skipping create...Using existing machine configuration
	I1005 20:45:14.586642  941739 fix.go:54] fixHost starting: 
	I1005 20:45:14.587273  941739 cli_runner.go:164] Run: docker container inspect newest-cni-251602 --format={{.State.Status}}
	I1005 20:45:14.605317  941739 fix.go:102] recreateIfNeeded on newest-cni-251602: state=Stopped err=<nil>
	W1005 20:45:14.605354  941739 fix.go:128] unexpected machine state, will restart: <nil>
	I1005 20:45:14.607609  941739 out.go:177] * Restarting existing docker container for "newest-cni-251602" ...
	I1005 20:45:15.417486  848852 system_pods.go:86] 7 kube-system pods found
	I1005 20:45:15.417531  848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:45:15.417542  848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
	I1005 20:45:15.417550  848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
	I1005 20:45:15.417558  848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
	I1005 20:45:15.417565  848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:45:15.417579  848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
	I1005 20:45:15.417589  848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:45:15.417621  848852 retry.go:31] will retry after 1m12.232820849s: missing components: kube-dns, kube-proxy
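	The 848852 lines interleaved here belong to the old-k8s-version-330869 run, which is still polling kube-system and has scheduled another retry because kube-dns and kube-proxy never report Ready. A rough manual equivalent of that poll, as a sketch only (it assumes the kubeconfig context carries the profile name, which is how minikube names it, and that the profile still exists at the time):

		# hypothetical manual check; the test polls the Kubernetes API directly rather than shelling out
		kubectl --context old-k8s-version-330869 -n kube-system get pods
		kubectl --context old-k8s-version-330869 -n kube-system describe pod -l k8s-app=kube-dns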
	I1005 20:45:14.609066  941739 cli_runner.go:164] Run: docker start newest-cni-251602
	I1005 20:45:14.897686  941739 cli_runner.go:164] Run: docker container inspect newest-cni-251602 --format={{.State.Status}}
	I1005 20:45:14.916217  941739 kic.go:426] container "newest-cni-251602" state is running.
	I1005 20:45:14.916594  941739 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-251602
	I1005 20:45:14.935722  941739 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602/config.json ...
	I1005 20:45:14.935987  941739 machine.go:88] provisioning docker machine ...
	I1005 20:45:14.936015  941739 ubuntu.go:169] provisioning hostname "newest-cni-251602"
	I1005 20:45:14.936080  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:14.954269  941739 main.go:141] libmachine: Using SSH client type: native
	I1005 20:45:14.954655  941739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1005 20:45:14.954675  941739 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-251602 && echo "newest-cni-251602" | sudo tee /etc/hostname
	I1005 20:45:14.955367  941739 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60694->127.0.0.1:33423: read: connection reset by peer
	I1005 20:45:18.101383  941739 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-251602
	
	I1005 20:45:18.101493  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:18.118632  941739 main.go:141] libmachine: Using SSH client type: native
	I1005 20:45:18.118970  941739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1005 20:45:18.118988  941739 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-251602' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-251602/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-251602' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1005 20:45:18.254181  941739 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1005 20:45:18.254212  941739 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17363-491115/.minikube CaCertPath:/home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17363-491115/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17363-491115/.minikube}
	I1005 20:45:18.254247  941739 ubuntu.go:177] setting up certificates
	I1005 20:45:18.254259  941739 provision.go:83] configureAuth start
	I1005 20:45:18.254314  941739 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-251602
	I1005 20:45:18.271133  941739 provision.go:138] copyHostCerts
	I1005 20:45:18.271209  941739 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-491115/.minikube/ca.pem, removing ...
	I1005 20:45:18.271225  941739 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-491115/.minikube/ca.pem
	I1005 20:45:18.271301  941739 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17363-491115/.minikube/ca.pem (1082 bytes)
	I1005 20:45:18.271415  941739 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-491115/.minikube/cert.pem, removing ...
	I1005 20:45:18.271430  941739 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-491115/.minikube/cert.pem
	I1005 20:45:18.271455  941739 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17363-491115/.minikube/cert.pem (1123 bytes)
	I1005 20:45:18.271518  941739 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-491115/.minikube/key.pem, removing ...
	I1005 20:45:18.271526  941739 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-491115/.minikube/key.pem
	I1005 20:45:18.271548  941739 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17363-491115/.minikube/key.pem (1679 bytes)
	I1005 20:45:18.271607  941739 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17363-491115/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca-key.pem org=jenkins.newest-cni-251602 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-251602]
	I1005 20:45:18.410529  941739 provision.go:172] copyRemoteCerts
	I1005 20:45:18.410591  941739 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1005 20:45:18.410642  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:18.427655  941739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/newest-cni-251602/id_rsa Username:docker}
	I1005 20:45:18.525913  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1005 20:45:18.548522  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1005 20:45:18.571080  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1005 20:45:18.594270  941739 provision.go:86] duration metric: configureAuth took 339.997588ms
	I1005 20:45:18.594302  941739 ubuntu.go:193] setting minikube options for container-runtime
	I1005 20:45:18.594515  941739 config.go:182] Loaded profile config "newest-cni-251602": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1005 20:45:18.594580  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:18.611692  941739 main.go:141] libmachine: Using SSH client type: native
	I1005 20:45:18.612072  941739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1005 20:45:18.612089  941739 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1005 20:45:18.745964  941739 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1005 20:45:18.745987  941739 ubuntu.go:71] root file system type: overlay
	I1005 20:45:18.746127  941739 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1005 20:45:18.746195  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:18.763221  941739 main.go:141] libmachine: Using SSH client type: native
	I1005 20:45:18.763676  941739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1005 20:45:18.763773  941739 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1005 20:45:18.908747  941739 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1005 20:45:18.908833  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:18.927242  941739 main.go:141] libmachine: Using SSH client type: native
	I1005 20:45:18.927586  941739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1005 20:45:18.927612  941739 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1005 20:45:19.070807  941739 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1005 20:45:19.070845  941739 machine.go:91] provisioned docker machine in 4.134838843s
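	Provisioning finishes once the regenerated docker.service (echoed in full above) is swapped in: the unit is written to docker.service.new and only moved over the live unit, followed by daemon-reload/enable/restart, when the diff shows it actually changed. Its key edits are the cleared-then-reset ExecStart, the TLS flags pointing at the certs copied to /etc/docker earlier in this log, and --insecure-registry for the 10.96.0.0/12 service CIDR. A sketch of how to inspect the result while the profile is still up (minikube itself runs the same underlying commands further down):

		# sketch: check the installed unit and the daemon's effective cgroup driver
		out/minikube-linux-amd64 ssh -p newest-cni-251602 "sudo systemctl cat docker.service"
		out/minikube-linux-amd64 ssh -p newest-cni-251602 "docker info --format '{{.CgroupDriver}}'"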
	I1005 20:45:19.070863  941739 start.go:300] post-start starting for "newest-cni-251602" (driver="docker")
	I1005 20:45:19.070880  941739 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1005 20:45:19.070965  941739 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1005 20:45:19.071034  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:19.088361  941739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/newest-cni-251602/id_rsa Username:docker}
	I1005 20:45:19.186060  941739 ssh_runner.go:195] Run: cat /etc/os-release
	I1005 20:45:19.189266  941739 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1005 20:45:19.189348  941739 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1005 20:45:19.189371  941739 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1005 20:45:19.189382  941739 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1005 20:45:19.189396  941739 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-491115/.minikube/addons for local assets ...
	I1005 20:45:19.189452  941739 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-491115/.minikube/files for local assets ...
	I1005 20:45:19.189539  941739 filesync.go:149] local asset: /home/jenkins/minikube-integration/17363-491115/.minikube/files/etc/ssl/certs/4979262.pem -> 4979262.pem in /etc/ssl/certs
	I1005 20:45:19.189654  941739 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1005 20:45:19.198001  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/files/etc/ssl/certs/4979262.pem --> /etc/ssl/certs/4979262.pem (1708 bytes)
	I1005 20:45:19.219671  941739 start.go:303] post-start completed in 148.789062ms
	I1005 20:45:19.219760  941739 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1005 20:45:19.219819  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:19.237287  941739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/newest-cni-251602/id_rsa Username:docker}
	I1005 20:45:19.330407  941739 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1005 20:45:19.334776  941739 fix.go:56] fixHost completed within 4.748135457s
	I1005 20:45:19.334813  941739 start.go:83] releasing machines lock for "newest-cni-251602", held for 4.7482043s
	I1005 20:45:19.334891  941739 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-251602
	I1005 20:45:19.351556  941739 ssh_runner.go:195] Run: cat /version.json
	I1005 20:45:19.351608  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:19.351662  941739 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1005 20:45:19.351741  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:19.368619  941739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/newest-cni-251602/id_rsa Username:docker}
	I1005 20:45:19.369076  941739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/newest-cni-251602/id_rsa Username:docker}
	I1005 20:45:19.550177  941739 ssh_runner.go:195] Run: systemctl --version
	I1005 20:45:19.554696  941739 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1005 20:45:19.559119  941739 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1005 20:45:19.576904  941739 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1005 20:45:19.576985  941739 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1005 20:45:19.585375  941739 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
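	Before settling on a cgroup driver, minikube normalizes the node's CNI directory: the loopback config is patched to cniVersion 1.0.0, and any pre-existing bridge/podman configs would be renamed to *.mk_disabled so they cannot shadow the CNI it selects (bridge, per the "recommending bridge" lines above); in this run there were none to disable. A sketch for checking the directory state by hand:

		# sketch: list the CNI configs left active on the node
		out/minikube-linux-amd64 ssh -p newest-cni-251602 "ls -la /etc/cni/net.d"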
	I1005 20:45:19.585410  941739 start.go:469] detecting cgroup driver to use...
	I1005 20:45:19.585444  941739 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1005 20:45:19.585560  941739 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1005 20:45:19.600124  941739 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1005 20:45:19.609154  941739 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1005 20:45:19.618149  941739 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1005 20:45:19.618216  941739 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1005 20:45:19.627522  941739 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1005 20:45:19.636836  941739 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1005 20:45:19.646086  941739 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1005 20:45:19.655673  941739 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1005 20:45:19.664512  941739 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1005 20:45:19.674505  941739 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1005 20:45:19.682683  941739 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1005 20:45:19.691073  941739 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1005 20:45:19.769287  941739 ssh_runner.go:195] Run: sudo systemctl restart containerd
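	Even though the selected runtime is Docker, containerd sits underneath it, so its config.toml is rewritten in place before the restart: the sandbox image is pinned to registry.k8s.io/pause:3.9, SystemdCgroup is forced to false to match the cgroupfs driver detected on the host, and the runtime handler is normalized to io.containerd.runc.v2. One way to spot-check the outcome, as a sketch:

		# sketch: confirm the cgroup setting the sed edits above were meant to produce
		out/minikube-linux-amd64 ssh -p newest-cni-251602 "sudo grep -n SystemdCgroup /etc/containerd/config.toml"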
	I1005 20:45:19.852660  941739 start.go:469] detecting cgroup driver to use...
	I1005 20:45:19.852792  941739 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1005 20:45:19.852882  941739 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1005 20:45:19.864848  941739 cruntime.go:277] skipping containerd shutdown because we are bound to it
	I1005 20:45:19.864918  941739 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1005 20:45:19.877630  941739 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1005 20:45:19.895392  941739 ssh_runner.go:195] Run: which cri-dockerd
	I1005 20:45:19.899661  941739 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1005 20:45:19.918552  941739 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1005 20:45:19.936911  941739 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1005 20:45:20.046865  941739 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1005 20:45:20.144163  941739 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I1005 20:45:20.144299  941739 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1005 20:45:20.161707  941739 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1005 20:45:20.251848  941739 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1005 20:45:20.520825  941739 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1005 20:45:20.605718  941739 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1005 20:45:20.688963  941739 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1005 20:45:20.773512  941739 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1005 20:45:20.854013  941739 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1005 20:45:20.867324  941739 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1005 20:45:20.946882  941739 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1005 20:45:21.017496  941739 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1005 20:45:21.017569  941739 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1005 20:45:21.021797  941739 start.go:537] Will wait 60s for crictl version
	I1005 20:45:21.021856  941739 ssh_runner.go:195] Run: which crictl
	I1005 20:45:21.025426  941739 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1005 20:45:21.070905  941739 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1005 20:45:21.070975  941739 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1005 20:45:21.094936  941739 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1005 20:45:21.121912  941739 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I1005 20:45:21.121999  941739 cli_runner.go:164] Run: docker network inspect newest-cni-251602 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1005 20:45:21.138556  941739 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I1005 20:45:21.142440  941739 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
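	The host.minikube.internal entry written into the node's /etc/hosts gives workloads a stable name for the machine running the tests; 192.168.67.1 here should be the gateway of the newest-cni-251602 Docker network inspected just above (the node itself is 192.168.67.2). A sketch of the check from inside the node:

		# sketch: resolve the host alias that was just added
		out/minikube-linux-amd64 ssh -p newest-cni-251602 "getent hosts host.minikube.internal"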
	I1005 20:45:21.154570  941739 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1005 20:45:21.157976  941739 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1005 20:45:21.158071  941739 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1005 20:45:21.178251  941739 docker.go:664] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1005 20:45:21.178278  941739 docker.go:594] Images already preloaded, skipping extraction
	I1005 20:45:21.178347  941739 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1005 20:45:21.197723  941739 docker.go:664] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1005 20:45:21.197759  941739 cache_images.go:84] Images are preloaded, skipping loading
	I1005 20:45:21.197823  941739 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1005 20:45:21.251580  941739 cni.go:84] Creating CNI manager for ""
	I1005 20:45:21.251616  941739 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1005 20:45:21.251639  941739 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I1005 20:45:21.251658  941739 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-251602 NodeName:newest-cni-251602 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[
] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1005 20:45:21.251840  941739 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-251602"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1005 20:45:21.251930  941739 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-251602 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:newest-cni-251602 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
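	The kubelet drop-in above uses the same clear-then-set ExecStart pattern as the docker unit: it binds the kubelet to cri-dockerd, pins the node IP and hostname override, and carries the ServerSideApply feature gate; the 415-byte scp just below writes it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A sketch for reviewing the merged unit on the node:

		# sketch: show the kubelet unit including this drop-in
		out/minikube-linux-amd64 ssh -p newest-cni-251602 "sudo systemctl cat kubelet"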
	I1005 20:45:21.251984  941739 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1005 20:45:21.260656  941739 binaries.go:44] Found k8s binaries, skipping transfer
	I1005 20:45:21.260726  941739 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1005 20:45:21.269056  941739 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (415 bytes)
	I1005 20:45:21.286089  941739 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1005 20:45:21.302730  941739 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
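	At this point the four-document kubeadm config rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) has been staged on the node as /var/tmp/minikube/kubeadm.yaml.new; note how the 10.42.0.0/16 value from --extra-config=kubeadm.pod-network-cidr surfaced both as networking.podSubnet and as the kube-proxy clusterCIDR, with the ServerSideApply=true feature gate injected into all three control-plane components. A sketch for reading the staged file back:

		# sketch: read the staged kubeadm config off the node
		out/minikube-linux-amd64 ssh -p newest-cni-251602 "sudo cat /var/tmp/minikube/kubeadm.yaml.new"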
	I1005 20:45:21.319579  941739 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1005 20:45:21.322925  941739 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1005 20:45:21.333438  941739 certs.go:56] Setting up /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602 for IP: 192.168.67.2
	I1005 20:45:21.333472  941739 certs.go:190] acquiring lock for shared ca certs: {Name:mka6627fa5c31076c5fa233a6bbda946476bff5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:45:21.333619  941739 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17363-491115/.minikube/ca.key
	I1005 20:45:21.333654  941739 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17363-491115/.minikube/proxy-client-ca.key
	I1005 20:45:21.333737  941739 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602/client.key
	I1005 20:45:21.333791  941739 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602/apiserver.key.c7fa3a9e
	I1005 20:45:21.333823  941739 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602/proxy-client.key
	I1005 20:45:21.333912  941739 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/home/jenkins/minikube-integration/17363-491115/.minikube/certs/497926.pem (1338 bytes)
	W1005 20:45:21.333938  941739 certs.go:433] ignoring /home/jenkins/minikube-integration/17363-491115/.minikube/certs/home/jenkins/minikube-integration/17363-491115/.minikube/certs/497926_empty.pem, impossibly tiny 0 bytes
	I1005 20:45:21.333949  941739 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca-key.pem (1671 bytes)
	I1005 20:45:21.333973  941739 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca.pem (1082 bytes)
	I1005 20:45:21.334008  941739 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/home/jenkins/minikube-integration/17363-491115/.minikube/certs/cert.pem (1123 bytes)
	I1005 20:45:21.334047  941739 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/home/jenkins/minikube-integration/17363-491115/.minikube/certs/key.pem (1679 bytes)
	I1005 20:45:21.334102  941739 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-491115/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17363-491115/.minikube/files/etc/ssl/certs/4979262.pem (1708 bytes)
	I1005 20:45:21.334741  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1005 20:45:21.357132  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1005 20:45:21.379412  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1005 20:45:21.402402  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1005 20:45:21.425553  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1005 20:45:21.448572  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1005 20:45:21.470803  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1005 20:45:21.492671  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1005 20:45:21.514617  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/certs/497926.pem --> /usr/share/ca-certificates/497926.pem (1338 bytes)
	I1005 20:45:21.537065  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/files/etc/ssl/certs/4979262.pem --> /usr/share/ca-certificates/4979262.pem (1708 bytes)
	I1005 20:45:21.559657  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1005 20:45:21.582144  941739 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1005 20:45:21.598672  941739 ssh_runner.go:195] Run: openssl version
	I1005 20:45:21.604061  941739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/497926.pem && ln -fs /usr/share/ca-certificates/497926.pem /etc/ssl/certs/497926.pem"
	I1005 20:45:21.613694  941739 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/497926.pem
	I1005 20:45:21.617122  941739 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  5 20:07 /usr/share/ca-certificates/497926.pem
	I1005 20:45:21.617186  941739 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/497926.pem
	I1005 20:45:21.623795  941739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/497926.pem /etc/ssl/certs/51391683.0"
	I1005 20:45:21.632192  941739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4979262.pem && ln -fs /usr/share/ca-certificates/4979262.pem /etc/ssl/certs/4979262.pem"
	I1005 20:45:21.641540  941739 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4979262.pem
	I1005 20:45:21.644804  941739 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  5 20:07 /usr/share/ca-certificates/4979262.pem
	I1005 20:45:21.644853  941739 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4979262.pem
	I1005 20:45:21.651399  941739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4979262.pem /etc/ssl/certs/3ec20f2e.0"
	I1005 20:45:21.659734  941739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1005 20:45:21.668779  941739 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1005 20:45:21.672400  941739 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  5 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1005 20:45:21.672473  941739 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1005 20:45:21.678971  941739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1005 20:45:21.688374  941739 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1005 20:45:21.691701  941739 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1005 20:45:21.698446  941739 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1005 20:45:21.704585  941739 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1005 20:45:21.710930  941739 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1005 20:45:21.717269  941739 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1005 20:45:21.723706  941739 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1005 20:45:21.730244  941739 kubeadm.go:404] StartCluster: {Name:newest-cni-251602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-251602 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 20:45:21.730390  941739 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1005 20:45:21.749238  941739 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1005 20:45:21.757704  941739 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1005 20:45:21.757777  941739 kubeadm.go:636] restartCluster start
	I1005 20:45:21.757833  941739 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1005 20:45:21.766002  941739 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:21.766568  941739 kubeconfig.go:135] verify returned: extract IP: "newest-cni-251602" does not appear in /home/jenkins/minikube-integration/17363-491115/kubeconfig
	I1005 20:45:21.766798  941739 kubeconfig.go:146] "newest-cni-251602" context is missing from /home/jenkins/minikube-integration/17363-491115/kubeconfig - will repair!
	I1005 20:45:21.767178  941739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-491115/kubeconfig: {Name:mkd6618cb8d42fbccf8ec108c3891f3e690ff249 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:45:21.768584  941739 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1005 20:45:21.777081  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:21.777142  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:21.786498  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:21.786517  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:21.786555  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:21.795849  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:22.296543  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:22.296643  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:22.307113  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:22.796806  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:22.796920  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:22.807658  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:23.296196  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:23.296307  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:23.307063  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:23.796660  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:23.796750  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:23.807326  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:24.296919  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:24.297003  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:24.307595  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:24.796497  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:24.796585  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:24.807169  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:25.296770  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:25.296888  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:25.307546  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:25.796061  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:25.796166  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:25.806783  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:26.296330  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:26.296433  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:26.307074  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:26.796470  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:26.796577  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:26.806786  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:27.296331  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:27.296415  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:27.306522  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:27.796815  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:27.796927  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:27.807056  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:28.296676  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:28.296772  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:28.307093  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:28.796685  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:28.796792  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:28.807035  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:29.296656  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:29.296766  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:29.306878  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:29.796676  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:29.796758  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:29.807141  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:30.296755  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:30.296850  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:30.306907  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:30.796266  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:30.796377  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:30.806636  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:31.296136  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:31.296248  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:31.306343  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:31.778147  941739 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1005 20:45:31.778197  941739 kubeadm.go:1128] stopping kube-system containers ...
	I1005 20:45:31.778276  941739 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1005 20:45:31.799139  941739 docker.go:463] Stopping containers: [edbeda11d2dc 9f2cb55357e2 f6bfaab5a6ac 67260f7c09c8 fd1feebd6b30 dd66b2b22702 01b6b78a55a3 b4382c5ea59f 12ffa1278374 5918e2e006de 5959b2ce7826 84e04d4b3dda 2fee3456f3f2 4f3db88655d7]
	I1005 20:45:31.799221  941739 ssh_runner.go:195] Run: docker stop edbeda11d2dc 9f2cb55357e2 f6bfaab5a6ac 67260f7c09c8 fd1feebd6b30 dd66b2b22702 01b6b78a55a3 b4382c5ea59f 12ffa1278374 5918e2e006de 5959b2ce7826 84e04d4b3dda 2fee3456f3f2 4f3db88655d7
	I1005 20:45:31.819269  941739 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1005 20:45:31.831589  941739 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1005 20:45:31.840562  941739 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Oct  5 20:44 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Oct  5 20:44 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Oct  5 20:44 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Oct  5 20:44 /etc/kubernetes/scheduler.conf
	
	I1005 20:45:31.840635  941739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1005 20:45:31.848959  941739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1005 20:45:31.857521  941739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1005 20:45:31.865912  941739 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:31.865992  941739 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1005 20:45:31.874539  941739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1005 20:45:31.882971  941739 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:31.883036  941739 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1005 20:45:31.891165  941739 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1005 20:45:31.899809  941739 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1005 20:45:31.899844  941739 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1005 20:45:31.950458  941739 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1005 20:45:32.439655  941739 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1005 20:45:32.588235  941739 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1005 20:45:32.644120  941739 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1005 20:45:32.740951  941739 api_server.go:52] waiting for apiserver process to appear ...
	I1005 20:45:32.741029  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1005 20:45:32.753615  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1005 20:45:33.330126  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1005 20:45:33.829788  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1005 20:45:33.846461  941739 api_server.go:72] duration metric: took 1.105507442s to wait for apiserver process to appear ...
	I1005 20:45:33.846542  941739 api_server.go:88] waiting for apiserver healthz status ...
	I1005 20:45:33.846578  941739 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 20:45:33.846977  941739 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I1005 20:45:33.847055  941739 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 20:45:33.847357  941739 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I1005 20:45:34.348075  941739 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 20:45:36.627973  941739 api_server.go:279] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1005 20:45:36.628063  941739 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1005 20:45:36.628087  941739 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 20:45:36.740856  941739 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1005 20:45:36.740956  941739 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1005 20:45:36.848296  941739 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 20:45:36.853601  941739 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1005 20:45:36.853628  941739 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1005 20:45:37.348237  941739 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 20:45:37.352593  941739 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1005 20:45:37.352618  941739 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1005 20:45:37.847923  941739 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 20:45:37.852873  941739 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1005 20:45:37.852902  941739 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1005 20:45:38.348152  941739 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 20:45:38.354442  941739 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1005 20:45:38.363755  941739 api_server.go:141] control plane version: v1.28.2
	I1005 20:45:38.363785  941739 api_server.go:131] duration metric: took 4.517223524s to wait for apiserver health ...
	I1005 20:45:38.363796  941739 cni.go:84] Creating CNI manager for ""
	I1005 20:45:38.363807  941739 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1005 20:45:38.365566  941739 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1005 20:45:38.366945  941739 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1005 20:45:38.375605  941739 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1005 20:45:38.418968  941739 system_pods.go:43] waiting for kube-system pods to appear ...
	I1005 20:45:38.430492  941739 system_pods.go:59] 8 kube-system pods found
	I1005 20:45:38.430531  941739 system_pods.go:61] "coredns-5dd5756b68-bm584" [0aa18475-85e2-44fd-b2f3-bea8e676ae2e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:45:38.430541  941739 system_pods.go:61] "etcd-newest-cni-251602" [34417493-e814-4f29-b447-2863b3cfcf27] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1005 20:45:38.430550  941739 system_pods.go:61] "kube-apiserver-newest-cni-251602" [ea09c35f-5b0a-4a6f-b11c-8a5516fbd861] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1005 20:45:38.430560  941739 system_pods.go:61] "kube-controller-manager-newest-cni-251602" [4636875d-c7bf-4080-a173-2ea829bdbbc3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1005 20:45:38.430571  941739 system_pods.go:61] "kube-proxy-vtq52" [c6349e67-7d8d-4cca-9b07-1eb70a41bb60] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:45:38.430603  941739 system_pods.go:61] "kube-scheduler-newest-cni-251602" [c3320fb3-4468-4f4c-ac6e-3d3aa7c4af0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1005 20:45:38.430617  941739 system_pods.go:61] "metrics-server-57f55c9bc5-75jt5" [6455e407-161e-4abe-94a4-8fb5968789b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1005 20:45:38.430631  941739 system_pods.go:61] "storage-provisioner" [50c99270-263a-466f-ae78-8da1c3fe7545] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:45:38.430641  941739 system_pods.go:74] duration metric: took 11.652857ms to wait for pod list to return data ...
	I1005 20:45:38.430649  941739 node_conditions.go:102] verifying NodePressure condition ...
	I1005 20:45:38.435489  941739 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1005 20:45:38.435522  941739 node_conditions.go:123] node cpu capacity is 8
	I1005 20:45:38.435538  941739 node_conditions.go:105] duration metric: took 4.879676ms to run NodePressure ...
	I1005 20:45:38.435565  941739 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1005 20:45:38.709413  941739 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1005 20:45:38.718207  941739 ops.go:34] apiserver oom_adj: -16
	I1005 20:45:38.718235  941739 kubeadm.go:640] restartCluster took 16.960444278s
	I1005 20:45:38.718247  941739 kubeadm.go:406] StartCluster complete in 16.988017482s
	I1005 20:45:38.718274  941739 settings.go:142] acquiring lock: {Name:mk74c5e95d8c9fcaf06097e6d304129504752ec7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:45:38.718351  941739 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17363-491115/kubeconfig
	I1005 20:45:38.719220  941739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-491115/kubeconfig: {Name:mkd6618cb8d42fbccf8ec108c3891f3e690ff249 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:45:38.719473  941739 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1005 20:45:38.719630  941739 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1005 20:45:38.719714  941739 config.go:182] Loaded profile config "newest-cni-251602": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1005 20:45:38.719720  941739 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-251602"
	I1005 20:45:38.719738  941739 addons.go:231] Setting addon storage-provisioner=true in "newest-cni-251602"
	W1005 20:45:38.719746  941739 addons.go:240] addon storage-provisioner should already be in state true
	I1005 20:45:38.719745  941739 addons.go:69] Setting metrics-server=true in profile "newest-cni-251602"
	I1005 20:45:38.719747  941739 addons.go:69] Setting default-storageclass=true in profile "newest-cni-251602"
	I1005 20:45:38.719763  941739 addons.go:231] Setting addon metrics-server=true in "newest-cni-251602"
	W1005 20:45:38.719772  941739 addons.go:240] addon metrics-server should already be in state true
	I1005 20:45:38.719786  941739 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-251602"
	I1005 20:45:38.719799  941739 host.go:66] Checking if "newest-cni-251602" exists ...
	I1005 20:45:38.719813  941739 host.go:66] Checking if "newest-cni-251602" exists ...
	I1005 20:45:38.719800  941739 addons.go:69] Setting dashboard=true in profile "newest-cni-251602"
	I1005 20:45:38.719834  941739 addons.go:231] Setting addon dashboard=true in "newest-cni-251602"
	W1005 20:45:38.719843  941739 addons.go:240] addon dashboard should already be in state true
	I1005 20:45:38.719903  941739 host.go:66] Checking if "newest-cni-251602" exists ...
	I1005 20:45:38.720124  941739 cli_runner.go:164] Run: docker container inspect newest-cni-251602 --format={{.State.Status}}
	I1005 20:45:38.720279  941739 cli_runner.go:164] Run: docker container inspect newest-cni-251602 --format={{.State.Status}}
	I1005 20:45:38.720282  941739 cli_runner.go:164] Run: docker container inspect newest-cni-251602 --format={{.State.Status}}
	I1005 20:45:38.720344  941739 cli_runner.go:164] Run: docker container inspect newest-cni-251602 --format={{.State.Status}}
	I1005 20:45:38.723756  941739 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-251602" context rescaled to 1 replicas
	I1005 20:45:38.723804  941739 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1005 20:45:38.727049  941739 out.go:177] * Verifying Kubernetes components...
	I1005 20:45:38.728767  941739 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 20:45:38.743950  941739 addons.go:231] Setting addon default-storageclass=true in "newest-cni-251602"
	W1005 20:45:38.744161  941739 addons.go:240] addon default-storageclass should already be in state true
	I1005 20:45:38.744212  941739 host.go:66] Checking if "newest-cni-251602" exists ...
	I1005 20:45:38.744748  941739 cli_runner.go:164] Run: docker container inspect newest-cni-251602 --format={{.State.Status}}
	I1005 20:45:38.761605  941739 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1005 20:45:38.762994  941739 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1005 20:45:38.764361  941739 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1005 20:45:38.762962  941739 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1005 20:45:38.766924  941739 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1005 20:45:38.765746  941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1005 20:45:38.765763  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1005 20:45:38.768302  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:38.768331  941739 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1005 20:45:38.768349  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1005 20:45:38.768396  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:38.768481  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1005 20:45:38.768528  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:38.771651  941739 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1005 20:45:38.771678  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1005 20:45:38.771838  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:38.792082  941739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/newest-cni-251602/id_rsa Username:docker}
	I1005 20:45:38.798427  941739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/newest-cni-251602/id_rsa Username:docker}
	I1005 20:45:38.803117  941739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/newest-cni-251602/id_rsa Username:docker}
	I1005 20:45:38.806797  941739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/newest-cni-251602/id_rsa Username:docker}
	I1005 20:45:38.847194  941739 start.go:896] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1005 20:45:38.847278  941739 api_server.go:52] waiting for apiserver process to appear ...
	I1005 20:45:38.847344  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1005 20:45:38.926982  941739 api_server.go:72] duration metric: took 203.134329ms to wait for apiserver process to appear ...
	I1005 20:45:38.927013  941739 api_server.go:88] waiting for apiserver healthz status ...
	I1005 20:45:38.927033  941739 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 20:45:38.931963  941739 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1005 20:45:38.933196  941739 api_server.go:141] control plane version: v1.28.2
	I1005 20:45:38.933257  941739 api_server.go:131] duration metric: took 6.235518ms to wait for apiserver health ...
	I1005 20:45:38.933268  941739 system_pods.go:43] waiting for kube-system pods to appear ...
	I1005 20:45:38.938837  941739 system_pods.go:59] 8 kube-system pods found
	I1005 20:45:38.938869  941739 system_pods.go:61] "coredns-5dd5756b68-bm584" [0aa18475-85e2-44fd-b2f3-bea8e676ae2e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:45:38.938882  941739 system_pods.go:61] "etcd-newest-cni-251602" [34417493-e814-4f29-b447-2863b3cfcf27] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1005 20:45:38.938893  941739 system_pods.go:61] "kube-apiserver-newest-cni-251602" [ea09c35f-5b0a-4a6f-b11c-8a5516fbd861] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1005 20:45:38.938906  941739 system_pods.go:61] "kube-controller-manager-newest-cni-251602" [4636875d-c7bf-4080-a173-2ea829bdbbc3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1005 20:45:38.938913  941739 system_pods.go:61] "kube-proxy-vtq52" [c6349e67-7d8d-4cca-9b07-1eb70a41bb60] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:45:38.938919  941739 system_pods.go:61] "kube-scheduler-newest-cni-251602" [c3320fb3-4468-4f4c-ac6e-3d3aa7c4af0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1005 20:45:38.938932  941739 system_pods.go:61] "metrics-server-57f55c9bc5-75jt5" [6455e407-161e-4abe-94a4-8fb5968789b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1005 20:45:38.938943  941739 system_pods.go:61] "storage-provisioner" [50c99270-263a-466f-ae78-8da1c3fe7545] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:45:38.938955  941739 system_pods.go:74] duration metric: took 5.679606ms to wait for pod list to return data ...
	I1005 20:45:38.938967  941739 default_sa.go:34] waiting for default service account to be created ...
	I1005 20:45:38.941596  941739 default_sa.go:45] found service account: "default"
	I1005 20:45:38.941625  941739 default_sa.go:55] duration metric: took 2.647466ms for default service account to be created ...
	I1005 20:45:38.941638  941739 kubeadm.go:581] duration metric: took 217.801105ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I1005 20:45:38.941657  941739 node_conditions.go:102] verifying NodePressure condition ...
	I1005 20:45:38.944359  941739 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1005 20:45:38.944385  941739 node_conditions.go:123] node cpu capacity is 8
	I1005 20:45:38.944399  941739 node_conditions.go:105] duration metric: took 2.735534ms to run NodePressure ...
	I1005 20:45:38.944414  941739 start.go:228] waiting for startup goroutines ...
	I1005 20:45:39.031121  941739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1005 20:45:39.031835  941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1005 20:45:39.031864  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1005 20:45:39.037663  941739 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1005 20:45:39.037689  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1005 20:45:39.038028  941739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1005 20:45:39.052055  941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1005 20:45:39.052084  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1005 20:45:39.122929  941739 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1005 20:45:39.122960  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1005 20:45:39.135708  941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1005 20:45:39.135738  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1005 20:45:39.148797  941739 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1005 20:45:39.148828  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1005 20:45:39.233123  941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1005 20:45:39.233156  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1005 20:45:39.246996  941739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1005 20:45:39.325634  941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1005 20:45:39.325672  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1005 20:45:39.348115  941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1005 20:45:39.348137  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1005 20:45:39.436685  941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1005 20:45:39.436712  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1005 20:45:39.528259  941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1005 20:45:39.528287  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1005 20:45:39.547672  941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1005 20:45:39.547706  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1005 20:45:39.565975  941739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1005 20:45:40.443947  941739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.412776284s)
	I1005 20:45:40.444070  941739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.406004214s)
	I1005 20:45:40.571364  941739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.324316067s)
	I1005 20:45:40.571417  941739 addons.go:467] Verifying addon metrics-server=true in "newest-cni-251602"
	I1005 20:45:40.917851  941739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.351821533s)
	I1005 20:45:40.919845  941739 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-251602 addons enable metrics-server	
	
	
	I1005 20:45:40.921418  941739 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1005 20:45:40.922771  941739 addons.go:502] enable addons completed in 2.203154287s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1005 20:45:40.922805  941739 start.go:233] waiting for cluster config update ...
	I1005 20:45:40.922816  941739 start.go:242] writing updated cluster config ...
	I1005 20:45:40.923059  941739 ssh_runner.go:195] Run: rm -f paused
	I1005 20:45:40.970862  941739 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1005 20:45:40.972904  941739 out.go:177] * Done! kubectl is now configured to use "newest-cni-251602" cluster and "default" namespace by default
	I1005 20:46:27.657681  848852 system_pods.go:86] 7 kube-system pods found
	I1005 20:46:27.657726  848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:46:27.657736  848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
	I1005 20:46:27.657741  848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
	I1005 20:46:27.657747  848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
	I1005 20:46:27.657753  848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:46:27.657758  848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
	I1005 20:46:27.657766  848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:46:27.660128  848852 out.go:177] 
	W1005 20:46:27.662010  848852 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns, kube-proxy
	W1005 20:46:27.662024  848852 out.go:239] * 
	W1005 20:46:27.662801  848852 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1005 20:46:27.665143  848852 out.go:177] 
	
	* 
	* ==> Docker <==
	* Oct 05 20:37:00 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:00.741750210Z" level=info msg="Loading containers: start."
	Oct 05 20:37:00 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:00.835828175Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Oct 05 20:37:00 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:00.873800369Z" level=info msg="Loading containers: done."
	Oct 05 20:37:00 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:00.883733188Z" level=info msg="Docker daemon" commit=1a79695 graphdriver=overlay2 version=24.0.6
	Oct 05 20:37:00 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:00.883797436Z" level=info msg="Daemon has completed initialization"
	Oct 05 20:37:00 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:00.908207956Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 05 20:37:00 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:00.908224826Z" level=info msg="API listen on [::]:2376"
	Oct 05 20:37:00 old-k8s-version-330869 systemd[1]: Started Docker Application Container Engine.
	Oct 05 20:37:04 old-k8s-version-330869 systemd[1]: Stopping Docker Application Container Engine...
	Oct 05 20:37:04 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:04.385384032Z" level=info msg="Processing signal 'terminated'"
	Oct 05 20:37:04 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:04.387134388Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 05 20:37:04 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:04.388068483Z" level=info msg="Daemon shutdown complete"
	Oct 05 20:37:04 old-k8s-version-330869 systemd[1]: docker.service: Deactivated successfully.
	Oct 05 20:37:04 old-k8s-version-330869 systemd[1]: Stopped Docker Application Container Engine.
	Oct 05 20:37:04 old-k8s-version-330869 systemd[1]: Starting Docker Application Container Engine...
	Oct 05 20:37:04 old-k8s-version-330869 dockerd[1366]: time="2023-10-05T20:37:04.451611505Z" level=info msg="Starting up"
	Oct 05 20:37:04 old-k8s-version-330869 dockerd[1366]: time="2023-10-05T20:37:04.461647092Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 05 20:37:06 old-k8s-version-330869 dockerd[1366]: time="2023-10-05T20:37:06.744132135Z" level=info msg="Loading containers: start."
	Oct 05 20:37:06 old-k8s-version-330869 dockerd[1366]: time="2023-10-05T20:37:06.839612411Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Oct 05 20:37:07 old-k8s-version-330869 dockerd[1366]: time="2023-10-05T20:37:07.001066181Z" level=info msg="Loading containers: done."
	Oct 05 20:37:07 old-k8s-version-330869 dockerd[1366]: time="2023-10-05T20:37:07.016241859Z" level=info msg="Docker daemon" commit=1a79695 graphdriver=overlay2 version=24.0.6
	Oct 05 20:37:07 old-k8s-version-330869 dockerd[1366]: time="2023-10-05T20:37:07.016300931Z" level=info msg="Daemon has completed initialization"
	Oct 05 20:37:07 old-k8s-version-330869 dockerd[1366]: time="2023-10-05T20:37:07.039742052Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 05 20:37:07 old-k8s-version-330869 dockerd[1366]: time="2023-10-05T20:37:07.039779377Z" level=info msg="API listen on [::]:2376"
	Oct 05 20:37:07 old-k8s-version-330869 systemd[1]: Started Docker Application Container Engine.
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	134727f163f16       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   About a minute ago   Running             busybox                   0                   a4acd2c611f56       busybox
	56472dff5f81f       6e38f40d628db                                                                                         16 minutes ago       Running             storage-provisioner       0                   64075514dc163       storage-provisioner
	9f0be33584868       bf261d1579144                                                                                         16 minutes ago       Running             coredns                   0                   2e7135e437f0c       coredns-5644d7b6d9-k2f47
	cef84f5b51c49       c21b0c7400f98                                                                                         16 minutes ago       Running             kube-proxy                0                   a228f4c03cdba       kube-proxy-n9cwb
	530e42b9f6c77       b2756210eeabf                                                                                         17 minutes ago       Running             etcd                      0                   7ab4e42c79a68       etcd-old-k8s-version-330869
	6c66019a6e010       06a629a7e51cd                                                                                         17 minutes ago       Running             kube-controller-manager   0                   e87f561b15eaf       kube-controller-manager-old-k8s-version-330869
	a576da8318f84       301ddc62b80b1                                                                                         17 minutes ago       Running             kube-scheduler            0                   84631805dc0e9       kube-scheduler-old-k8s-version-330869
	91420fd2d357f       b305571ca60a5                                                                                         17 minutes ago       Running             kube-apiserver            0                   a2a65ce6717dd       kube-apiserver-old-k8s-version-330869
	
	* 
	* ==> coredns [9f0be3358486] <==
	* .:53
	2023-10-05T20:37:40.506Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-10-05T20:37:40.507Z [INFO] CoreDNS-1.6.2
	2023-10-05T20:37:40.507Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-330869
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-330869
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=300d55cee86053f5b4c7a654fc8e7b9d3c030d53
	                    minikube.k8s.io/name=old-k8s-version-330869
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_05T20_37_23_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Oct 2023 20:37:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Oct 2023 20:54:21 +0000   Thu, 05 Oct 2023 20:37:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Oct 2023 20:54:21 +0000   Thu, 05 Oct 2023 20:37:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Oct 2023 20:54:21 +0000   Thu, 05 Oct 2023 20:37:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Oct 2023 20:54:21 +0000   Thu, 05 Oct 2023 20:52:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-330869
	Capacity:
	 cpu:                8
	 ephemeral-storage:  304681132Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32859420Ki
	 pods:               110
	Allocatable:
	 cpu:                8
	 ephemeral-storage:  304681132Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32859420Ki
	 pods:               110
	System Info:
	 Machine ID:                 da3d4e78336e4de3801cc5f1121e363a
	 System UUID:                fb98631f-d977-49f6-8d13-47582452d2b5
	 Boot ID:                    1c650140-d8f3-4a50-ac83-e0e6baf94598
	 Kernel Version:             5.15.0-1044-gcp
	 OS Image:                   Ubuntu 22.04.3 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  docker://24.0.6
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  default                    busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	  kube-system                coredns-5644d7b6d9-k2f47                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     16m
	  kube-system                etcd-old-k8s-version-330869                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                kube-apiserver-old-k8s-version-330869             250m (3%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                kube-controller-manager-old-k8s-version-330869    200m (2%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                kube-proxy-n9cwb                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                kube-scheduler-old-k8s-version-330869             100m (1%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                650m (8%)  0 (0%)
	  memory             70Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  17m (x8 over 17m)  kubelet, old-k8s-version-330869     Node old-k8s-version-330869 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet, old-k8s-version-330869     Node old-k8s-version-330869 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x7 over 17m)  kubelet, old-k8s-version-330869     Node old-k8s-version-330869 status is now: NodeHasSufficientPID
	  Normal  Starting                 16m                kube-proxy, old-k8s-version-330869  Starting kube-proxy.
	  Normal  NodeReady                109s               kubelet, old-k8s-version-330869     Node old-k8s-version-330869 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0e ee c2 a6 29 ac 08 06
	[Oct 5 20:39] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev bridge
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 fb 5f c9 9e d7 08 06
	[  +0.715332] IPv4: martian source 10.244.0.1 from 10.244.0.7, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e 50 38 95 e7 63 08 06
	[  +8.065920] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 06 69 10 43 1f 0b 08 06
	[ +16.180606] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 6a 13 59 d9 da 08 06
	[Oct 5 20:43] IPv4: martian source 10.244.0.1 from 10.244.0.8, on dev eth0
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff ae 52 50 a9 6f 53 08 06
	[Oct 5 20:44] IPv4: martian source 10.244.0.1 from 10.244.0.10, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 46 f6 d1 58 d2 08 06
	[ +19.224580] IPv4: martian source 10.244.0.1 from 10.244.0.7, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3e 9e 9c 80 d0 43 08 06
	[  +8.732079] IPv4: martian source 10.244.0.1 from 10.244.0.9, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 96 3b 8d 2f b2 6f 08 06
	[  +1.563207] IPv4: martian source 10.244.0.1 from 10.244.0.7, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 1a 9a 54 1a fc 08 06
	[  +5.814222] IPv4: martian source 10.244.0.1 from 10.244.0.9, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea e4 45 a0 bd b2 08 06
	[Oct 5 20:45] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff f2 32 f7 4c 9e 13 08 06
	[ +35.890083] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ba 7a 81 76 84 0a 08 06
	
	* 
	* ==> etcd [530e42b9f6c7] <==
	* 2023-10-05 20:37:13.543597 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-10-05 20:37:13.544562 I | etcdserver: 9f0758e1c58a86ed as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-10-05 20:37:13.544945 I | etcdserver/membership: added member 9f0758e1c58a86ed [https://192.168.85.2:2380] to cluster 68eaea490fab4e05
	2023-10-05 20:37:13.546416 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-10-05 20:37:13.546557 I | embed: listening for metrics on http://192.168.85.2:2381
	2023-10-05 20:37:13.546670 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-10-05 20:37:14.535543 I | raft: 9f0758e1c58a86ed is starting a new election at term 1
	2023-10-05 20:37:14.535587 I | raft: 9f0758e1c58a86ed became candidate at term 2
	2023-10-05 20:37:14.535619 I | raft: 9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2
	2023-10-05 20:37:14.535634 I | raft: 9f0758e1c58a86ed became leader at term 2
	2023-10-05 20:37:14.535644 I | raft: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
	2023-10-05 20:37:14.535840 I | etcdserver: setting up the initial cluster version to 3.3
	2023-10-05 20:37:14.536888 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-10-05 20:37:14.536932 I | etcdserver/api: enabled capabilities for version 3.3
	2023-10-05 20:37:14.536948 I | embed: ready to serve client requests
	2023-10-05 20:37:14.536984 I | etcdserver: published {Name:old-k8s-version-330869 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
	2023-10-05 20:37:14.537012 I | embed: ready to serve client requests
	2023-10-05 20:37:14.538573 I | embed: serving client requests on 192.168.85.2:2379
	2023-10-05 20:37:14.538614 I | embed: serving client requests on 127.0.0.1:2379
	2023-10-05 20:37:52.212401 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-5644d7b6d9-k2f47\" " with result "range_response_count:1 size:1693" took too long (122.897349ms) to execute
	2023-10-05 20:37:52.212479 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:215" took too long (125.865716ms) to execute
	2023-10-05 20:47:14.658642 I | mvcc: store.index: compact 592
	2023-10-05 20:47:14.659659 I | mvcc: finished scheduled compaction at 592 (took 692.439µs)
	2023-10-05 20:52:14.662212 I | mvcc: store.index: compact 837
	2023-10-05 20:52:14.663105 I | mvcc: finished scheduled compaction at 837 (took 586.403µs)
	
	* 
	* ==> kernel <==
	*  20:54:31 up  2:36,  0 users,  load average: 0.24, 0.60, 1.69
	Linux old-k8s-version-330869 5.15.0-1044-gcp #52~20.04.1-Ubuntu SMP Wed Sep 20 16:25:19 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kube-apiserver [91420fd2d357] <==
	* I1005 20:37:18.648312       1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
	E1005 20:37:18.650296       1 controller.go:154] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.85.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1005 20:37:18.651420       1 apiservice_controller.go:94] Starting APIServiceRegistrationController
	I1005 20:37:18.651503       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I1005 20:37:18.747656       1 cache.go:39] Caches are synced for autoregister controller
	I1005 20:37:18.749173       1 shared_informer.go:204] Caches are synced for crd-autoregister 
	I1005 20:37:18.762082       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1005 20:37:18.762123       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1005 20:37:18.844252       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I1005 20:37:19.647651       1 controller.go:107] OpenAPI AggregationController: Processing item 
	I1005 20:37:19.647684       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1005 20:37:19.647699       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1005 20:37:19.651492       1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
	I1005 20:37:19.654366       1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
	I1005 20:37:19.654390       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I1005 20:37:21.429015       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1005 20:37:21.708924       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1005 20:37:22.050878       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1005 20:37:22.051722       1 controller.go:606] quota admission added evaluator for: endpoints
	I1005 20:37:22.934960       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I1005 20:37:23.322631       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I1005 20:37:23.671803       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I1005 20:37:38.427328       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I1005 20:37:38.453764       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I1005 20:37:38.539579       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [6c66019a6e01] <==
	* I1005 20:37:38.482277       1 shared_informer.go:204] Caches are synced for stateful set 
	I1005 20:37:38.487462       1 shared_informer.go:204] Caches are synced for ReplicationController 
	I1005 20:37:38.487721       1 shared_informer.go:204] Caches are synced for GC 
	I1005 20:37:38.487734       1 shared_informer.go:204] Caches are synced for PVC protection 
	I1005 20:37:38.487705       1 shared_informer.go:204] Caches are synced for attach detach 
	I1005 20:37:38.512949       1 shared_informer.go:204] Caches are synced for ReplicaSet 
	I1005 20:37:38.537588       1 shared_informer.go:204] Caches are synced for deployment 
	I1005 20:37:38.537985       1 shared_informer.go:204] Caches are synced for resource quota 
	I1005 20:37:38.542695       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"59e37402-092c-492a-8a24-0e86b565f6d7", APIVersion:"apps/v1", ResourceVersion:"192", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-5644d7b6d9 to 2
	I1005 20:37:38.550402       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"09eba7cb-b0a0-47ac-8889-0bda35b1e552", APIVersion:"apps/v1", ResourceVersion:"312", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-k2f47
	I1005 20:37:38.553758       1 shared_informer.go:204] Caches are synced for expand 
	I1005 20:37:38.559594       1 shared_informer.go:204] Caches are synced for resource quota 
	I1005 20:37:38.560551       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"09eba7cb-b0a0-47ac-8889-0bda35b1e552", APIVersion:"apps/v1", ResourceVersion:"312", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-wmjhd
	I1005 20:37:38.588856       1 shared_informer.go:204] Caches are synced for disruption 
	I1005 20:37:38.588883       1 disruption.go:341] Sending events to api server.
	I1005 20:37:38.605013       1 shared_informer.go:204] Caches are synced for persistent volume 
	I1005 20:37:38.646306       1 shared_informer.go:204] Caches are synced for garbage collector 
	I1005 20:37:38.681988       1 shared_informer.go:204] Caches are synced for service account 
	I1005 20:37:38.683218       1 shared_informer.go:204] Caches are synced for namespace 
	I1005 20:37:38.686440       1 shared_informer.go:204] Caches are synced for garbage collector 
	I1005 20:37:38.686460       1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1005 20:37:38.941357       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"59e37402-092c-492a-8a24-0e86b565f6d7", APIVersion:"apps/v1", ResourceVersion:"341", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-5644d7b6d9 to 1
	I1005 20:37:38.999527       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"09eba7cb-b0a0-47ac-8889-0bda35b1e552", APIVersion:"apps/v1", ResourceVersion:"342", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-5644d7b6d9-wmjhd
	I1005 20:40:53.441120       1 node_lifecycle_controller.go:1058] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I1005 20:52:43.475150       1 node_lifecycle_controller.go:1085] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [cef84f5b51c4] <==
	* W1005 20:37:40.040149       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1005 20:37:40.052818       1 node.go:135] Successfully retrieved node IP: 192.168.85.2
	I1005 20:37:40.052863       1 server_others.go:149] Using iptables Proxier.
	I1005 20:37:40.053425       1 server.go:529] Version: v1.16.0
	I1005 20:37:40.053948       1 config.go:131] Starting endpoints config controller
	I1005 20:37:40.053980       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1005 20:37:40.054066       1 config.go:313] Starting service config controller
	I1005 20:37:40.054081       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1005 20:37:40.157367       1 shared_informer.go:204] Caches are synced for service config 
	I1005 20:37:40.224240       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [a576da8318f8] <==
	* I1005 20:37:18.743716       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1005 20:37:18.744158       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1005 20:37:18.842482       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1005 20:37:18.843126       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1005 20:37:18.843177       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1005 20:37:18.843316       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1005 20:37:18.843330       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1005 20:37:18.843386       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1005 20:37:18.843440       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1005 20:37:18.843521       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1005 20:37:18.843846       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1005 20:37:18.844748       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1005 20:37:18.927555       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1005 20:37:19.843756       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1005 20:37:19.844669       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1005 20:37:19.845775       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1005 20:37:19.846714       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1005 20:37:19.850359       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1005 20:37:19.919069       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1005 20:37:19.919980       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1005 20:37:19.927483       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1005 20:37:19.928788       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1005 20:37:19.929881       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1005 20:37:19.931691       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1005 20:46:29.532005       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* Oct 05 20:51:21 old-k8s-version-330869 kubelet[2004]: I1005 20:51:21.468505    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 13m41.650972067s ago; threshold is 3m0s
	Oct 05 20:51:26 old-k8s-version-330869 kubelet[2004]: I1005 20:51:26.468759    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 13m46.651231708s ago; threshold is 3m0s
	Oct 05 20:51:31 old-k8s-version-330869 kubelet[2004]: I1005 20:51:31.468983    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 13m51.651446046s ago; threshold is 3m0s
	Oct 05 20:51:36 old-k8s-version-330869 kubelet[2004]: I1005 20:51:36.469271    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 13m56.651735191s ago; threshold is 3m0s
	Oct 05 20:51:41 old-k8s-version-330869 kubelet[2004]: I1005 20:51:41.469526    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 14m1.651995326s ago; threshold is 3m0s
	Oct 05 20:51:46 old-k8s-version-330869 kubelet[2004]: I1005 20:51:46.469737    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 14m6.652207391s ago; threshold is 3m0s
	Oct 05 20:51:51 old-k8s-version-330869 kubelet[2004]: I1005 20:51:51.469958    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 14m11.652428101s ago; threshold is 3m0s
	Oct 05 20:51:56 old-k8s-version-330869 kubelet[2004]: I1005 20:51:56.470209    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 14m16.652673957s ago; threshold is 3m0s
	Oct 05 20:52:01 old-k8s-version-330869 kubelet[2004]: I1005 20:52:01.470458    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 14m21.652921421s ago; threshold is 3m0s
	Oct 05 20:52:06 old-k8s-version-330869 kubelet[2004]: I1005 20:52:06.470666    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 14m26.65313406s ago; threshold is 3m0s
	Oct 05 20:52:11 old-k8s-version-330869 kubelet[2004]: I1005 20:52:11.470891    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 14m31.653359263s ago; threshold is 3m0s
	Oct 05 20:52:16 old-k8s-version-330869 kubelet[2004]: I1005 20:52:16.471119    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 14m36.653584922s ago; threshold is 3m0s
	Oct 05 20:52:21 old-k8s-version-330869 kubelet[2004]: I1005 20:52:21.471359    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 14m41.653827014s ago; threshold is 3m0s
	Oct 05 20:52:26 old-k8s-version-330869 kubelet[2004]: I1005 20:52:26.471596    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 14m46.654063899s ago; threshold is 3m0s
	Oct 05 20:52:31 old-k8s-version-330869 kubelet[2004]: I1005 20:52:31.471791    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 14m51.654265006s ago; threshold is 3m0s
	Oct 05 20:52:36 old-k8s-version-330869 kubelet[2004]: I1005 20:52:36.472027    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 14m56.654488765s ago; threshold is 3m0s
	Oct 05 20:52:39 old-k8s-version-330869 kubelet[2004]: E1005 20:52:39.019435    2004 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "coredns-5644d7b6d9-wmjhd": operation timeout: context deadline exceeded
	Oct 05 20:52:39 old-k8s-version-330869 kubelet[2004]: E1005 20:52:39.019533    2004 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-5644d7b6d9-wmjhd_kube-system(9cbed5dd-4684-4f3c-93d3-75465aeebcdc)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "coredns-5644d7b6d9-wmjhd": operation timeout: context deadline exceeded
	Oct 05 20:52:39 old-k8s-version-330869 kubelet[2004]: E1005 20:52:39.019547    2004 kuberuntime_manager.go:710] createPodSandbox for pod "coredns-5644d7b6d9-wmjhd_kube-system(9cbed5dd-4684-4f3c-93d3-75465aeebcdc)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "coredns-5644d7b6d9-wmjhd": operation timeout: context deadline exceeded
	Oct 05 20:52:39 old-k8s-version-330869 kubelet[2004]: E1005 20:52:39.019658    2004 pod_workers.go:191] Error syncing pod 9cbed5dd-4684-4f3c-93d3-75465aeebcdc ("coredns-5644d7b6d9-wmjhd_kube-system(9cbed5dd-4684-4f3c-93d3-75465aeebcdc)"), skipping: failed to "CreatePodSandbox" for "coredns-5644d7b6d9-wmjhd_kube-system(9cbed5dd-4684-4f3c-93d3-75465aeebcdc)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-5644d7b6d9-wmjhd_kube-system(9cbed5dd-4684-4f3c-93d3-75465aeebcdc)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"coredns-5644d7b6d9-wmjhd\": operation timeout: context deadline exceeded"
	Oct 05 20:52:40 old-k8s-version-330869 kubelet[2004]: E1005 20:52:40.128896    2004 kuberuntime_manager.go:920] PodSandboxStatus of sandbox "ce73ce4ed74195a85e09711fb49bcd0f0491f2710055da732259ae9b224a6ed6" for pod "coredns-5644d7b6d9-wmjhd_kube-system(9cbed5dd-4684-4f3c-93d3-75465aeebcdc)" error: rpc error: code = DeadlineExceeded desc = context deadline exceeded
	Oct 05 20:52:42 old-k8s-version-330869 kubelet[2004]: I1005 20:52:42.992874    2004 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-r58hx" (UniqueName: "kubernetes.io/secret/f8bda2b8-a4ff-4a22-bcdd-86323959b312-default-token-r58hx") pod "busybox" (UID: "f8bda2b8-a4ff-4a22-bcdd-86323959b312")
	Oct 05 20:52:43 old-k8s-version-330869 kubelet[2004]: W1005 20:52:43.430842    2004 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/busybox through plugin: invalid network status for
	Oct 05 20:53:12 old-k8s-version-330869 kubelet[2004]: E1005 20:53:12.166992    2004 remote_runtime.go:128] StopPodSandbox "ce73ce4ed74195a85e09711fb49bcd0f0491f2710055da732259ae9b224a6ed6" from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
	Oct 05 20:53:12 old-k8s-version-330869 kubelet[2004]: E1005 20:53:12.167043    2004 kuberuntime_gc.go:170] Failed to stop sandbox "ce73ce4ed74195a85e09711fb49bcd0f0491f2710055da732259ae9b224a6ed6" before removing: rpc error: code = DeadlineExceeded desc = context deadline exceeded
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-330869 -n old-k8s-version-330869
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-330869 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox coredns-5644d7b6d9-k2f47 storage-provisioner
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-330869 describe pod busybox coredns-5644d7b6d9-k2f47 storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-330869 describe pod busybox coredns-5644d7b6d9-k2f47 storage-provisioner: exit status 1 (67.059674ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             old-k8s-version-330869/192.168.85.2
	Start Time:       Thu, 05 Oct 2023 20:52:42 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from default-token-r58hx (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  default-token-r58hx:
	    Type:        Secret (a volume populated by a Secret)
	    SecretName:  default-token-r58hx
	    Optional:    false
	QoS Class:       BestEffort
	Node-Selectors:  <none>
	Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                   From               Message
	  ----     ------            ----                  ----               -------
	  Warning  FailedScheduling  8m2s                  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
	  Warning  FailedScheduling  6m42s (x1 over 8m2s)  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
	  Normal   Scheduled         108s                  default-scheduler  Successfully assigned default/busybox to old-k8s-version-330869
	  Normal   Pulling           108s                  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal   Pulled            107s                  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal   Created           107s                  kubelet            Created container busybox
	  Normal   Started           107s                  kubelet            Started container busybox

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-5644d7b6d9-k2f47" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-330869 describe pod busybox coredns-5644d7b6d9-k2f47 storage-provisioner: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-330869
helpers_test.go:235: (dbg) docker inspect old-k8s-version-330869:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0ffb18ccd18d3a519fe433681e826678add5b991e61a9bda0dbfb9ea0dee4ec9",
	        "Created": "2023-10-05T20:36:53.706444621Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 850463,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-05T20:36:54.08354438Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:94671ba3754e2c6976414eaf20a0c7861a5d2f9fc631e1161e8ab0ded9062c52",
	        "ResolvConfPath": "/var/lib/docker/containers/0ffb18ccd18d3a519fe433681e826678add5b991e61a9bda0dbfb9ea0dee4ec9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0ffb18ccd18d3a519fe433681e826678add5b991e61a9bda0dbfb9ea0dee4ec9/hostname",
	        "HostsPath": "/var/lib/docker/containers/0ffb18ccd18d3a519fe433681e826678add5b991e61a9bda0dbfb9ea0dee4ec9/hosts",
	        "LogPath": "/var/lib/docker/containers/0ffb18ccd18d3a519fe433681e826678add5b991e61a9bda0dbfb9ea0dee4ec9/0ffb18ccd18d3a519fe433681e826678add5b991e61a9bda0dbfb9ea0dee4ec9-json.log",
	        "Name": "/old-k8s-version-330869",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-330869:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-330869",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0d686d9896e444b3737621f3bb0399b6dd3266180ac56b4497ffd2325577edca-init/diff:/var/lib/docker/overlay2/e65b3f74dc6bfb6767eea300df98bf2be99245c1b234ea43800cf021cd81177d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0d686d9896e444b3737621f3bb0399b6dd3266180ac56b4497ffd2325577edca/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0d686d9896e444b3737621f3bb0399b6dd3266180ac56b4497ffd2325577edca/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0d686d9896e444b3737621f3bb0399b6dd3266180ac56b4497ffd2325577edca/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-330869",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-330869/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-330869",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-330869",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-330869",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e9d5900763ffac860582f91e1cc24789bad5009ed40771fbeb5d999159eee780",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33383"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33382"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33379"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33381"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33380"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e9d5900763ff",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-330869": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0ffb18ccd18d",
	                        "old-k8s-version-330869"
	                    ],
	                    "NetworkID": "b2ec8c9cc8a493d14667efb735586eda5a96dcf492505b426d598dbb05a7c972",
	                    "EndpointID": "75ea406fffba6499772cde5d775de2d6bc83b43060d83c230ed678cdae12bc5e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-330869 -n old-k8s-version-330869
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-330869 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p embed-certs-411409 sudo                             | embed-certs-411409           | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p embed-certs-411409                                  | embed-certs-411409           | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-411409                                  | embed-certs-411409           | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-411409                                  | embed-certs-411409           | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	| delete  | -p embed-certs-411409                                  | embed-certs-411409           | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	| start   | -p newest-cni-251602 --memory=2200 --alsologtostderr   | newest-cni-251602            | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:45 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=docker            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| ssh     | -p                                                     | default-k8s-diff-port-973002 | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	|         | default-k8s-diff-port-973002                           |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-973002 | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	|         | default-k8s-diff-port-973002                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-973002 | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	|         | default-k8s-diff-port-973002                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-973002 | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	|         | default-k8s-diff-port-973002                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-973002 | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	|         | default-k8s-diff-port-973002                           |                              |         |         |                     |                     |
	| ssh     | -p no-preload-477708 sudo                              | no-preload-477708            | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p no-preload-477708                                   | no-preload-477708            | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-477708                                   | no-preload-477708            | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-477708                                   | no-preload-477708            | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	| delete  | -p no-preload-477708                                   | no-preload-477708            | jenkins | v1.31.2 | 05 Oct 23 20:44 UTC | 05 Oct 23 20:44 UTC |
	| addons  | enable metrics-server -p newest-cni-251602             | newest-cni-251602            | jenkins | v1.31.2 | 05 Oct 23 20:45 UTC | 05 Oct 23 20:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-251602                                   | newest-cni-251602            | jenkins | v1.31.2 | 05 Oct 23 20:45 UTC | 05 Oct 23 20:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-251602                  | newest-cni-251602            | jenkins | v1.31.2 | 05 Oct 23 20:45 UTC | 05 Oct 23 20:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-251602 --memory=2200 --alsologtostderr   | newest-cni-251602            | jenkins | v1.31.2 | 05 Oct 23 20:45 UTC | 05 Oct 23 20:45 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=docker            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-251602 sudo                              | newest-cni-251602            | jenkins | v1.31.2 | 05 Oct 23 20:45 UTC | 05 Oct 23 20:45 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-251602                                   | newest-cni-251602            | jenkins | v1.31.2 | 05 Oct 23 20:45 UTC | 05 Oct 23 20:45 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-251602                                   | newest-cni-251602            | jenkins | v1.31.2 | 05 Oct 23 20:45 UTC | 05 Oct 23 20:45 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-251602                                   | newest-cni-251602            | jenkins | v1.31.2 | 05 Oct 23 20:45 UTC | 05 Oct 23 20:45 UTC |
	| delete  | -p newest-cni-251602                                   | newest-cni-251602            | jenkins | v1.31.2 | 05 Oct 23 20:45 UTC | 05 Oct 23 20:45 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/05 20:45:14
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1005 20:45:14.405012  941739 out.go:296] Setting OutFile to fd 1 ...
	I1005 20:45:14.405318  941739 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:45:14.405332  941739 out.go:309] Setting ErrFile to fd 2...
	I1005 20:45:14.405338  941739 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:45:14.405563  941739 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-491115/.minikube/bin
	I1005 20:45:14.406125  941739 out.go:303] Setting JSON to false
	I1005 20:45:14.408036  941739 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8863,"bootTime":1696529852,"procs":691,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1005 20:45:14.408112  941739 start.go:138] virtualization: kvm guest
	I1005 20:45:14.411041  941739 out.go:177] * [newest-cni-251602] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1005 20:45:14.412825  941739 out.go:177]   - MINIKUBE_LOCATION=17363
	I1005 20:45:14.414496  941739 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 20:45:14.412885  941739 notify.go:220] Checking for updates...
	I1005 20:45:14.417488  941739 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17363-491115/kubeconfig
	I1005 20:45:14.419444  941739 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-491115/.minikube
	I1005 20:45:14.420812  941739 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1005 20:45:14.422387  941739 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1005 20:45:14.424417  941739 config.go:182] Loaded profile config "newest-cni-251602": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1005 20:45:14.424920  941739 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 20:45:14.447137  941739 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 20:45:14.447233  941739 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 20:45:14.502313  941739 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:55 SystemTime:2023-10-05 20:45:14.492746667 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1005 20:45:14.502465  941739 docker.go:294] overlay module found
	I1005 20:45:14.504743  941739 out.go:177] * Using the docker driver based on existing profile
	I1005 20:45:14.506376  941739 start.go:298] selected driver: docker
	I1005 20:45:14.506399  941739 start.go:902] validating driver "docker" against &{Name:newest-cni-251602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-251602 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 20:45:14.506507  941739 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1005 20:45:14.507273  941739 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 20:45:14.559655  941739 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:55 SystemTime:2023-10-05 20:45:14.550952004 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1005 20:45:14.560012  941739 start_flags.go:942] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1005 20:45:14.560046  941739 cni.go:84] Creating CNI manager for ""
	I1005 20:45:14.560066  941739 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1005 20:45:14.560079  941739 start_flags.go:321] config:
	{Name:newest-cni-251602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-251602 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 20:45:14.562326  941739 out.go:177] * Starting control plane node newest-cni-251602 in cluster newest-cni-251602
	I1005 20:45:14.565495  941739 cache.go:122] Beginning downloading kic base image for docker with docker
	I1005 20:45:14.567000  941739 out.go:177] * Pulling base image ...
	I1005 20:45:14.568566  941739 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1005 20:45:14.568620  941739 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17363-491115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I1005 20:45:14.568631  941739 cache.go:57] Caching tarball of preloaded images
	I1005 20:45:14.568707  941739 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1005 20:45:14.568717  941739 preload.go:174] Found /home/jenkins/minikube-integration/17363-491115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1005 20:45:14.568791  941739 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1005 20:45:14.568916  941739 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602/config.json ...
	I1005 20:45:14.586420  941739 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon, skipping pull
	I1005 20:45:14.586452  941739 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in daemon, skipping load
	I1005 20:45:14.586477  941739 cache.go:195] Successfully downloaded all kic artifacts
	I1005 20:45:14.586522  941739 start.go:365] acquiring machines lock for newest-cni-251602: {Name:mkefe4baf7b8136c10dd9c20a98860ec3c495766 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 20:45:14.586596  941739 start.go:369] acquired machines lock for "newest-cni-251602" in 47.72µs
	I1005 20:45:14.586622  941739 start.go:96] Skipping create...Using existing machine configuration
	I1005 20:45:14.586642  941739 fix.go:54] fixHost starting: 
	I1005 20:45:14.587273  941739 cli_runner.go:164] Run: docker container inspect newest-cni-251602 --format={{.State.Status}}
	I1005 20:45:14.605317  941739 fix.go:102] recreateIfNeeded on newest-cni-251602: state=Stopped err=<nil>
	W1005 20:45:14.605354  941739 fix.go:128] unexpected machine state, will restart: <nil>
	I1005 20:45:14.607609  941739 out.go:177] * Restarting existing docker container for "newest-cni-251602" ...
	I1005 20:45:15.417486  848852 system_pods.go:86] 7 kube-system pods found
	I1005 20:45:15.417531  848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:45:15.417542  848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
	I1005 20:45:15.417550  848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
	I1005 20:45:15.417558  848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
	I1005 20:45:15.417565  848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:45:15.417579  848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
	I1005 20:45:15.417589  848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:45:15.417621  848852 retry.go:31] will retry after 1m12.232820849s: missing components: kube-dns, kube-proxy
	I1005 20:45:14.609066  941739 cli_runner.go:164] Run: docker start newest-cni-251602
	I1005 20:45:14.897686  941739 cli_runner.go:164] Run: docker container inspect newest-cni-251602 --format={{.State.Status}}
	I1005 20:45:14.916217  941739 kic.go:426] container "newest-cni-251602" state is running.
	I1005 20:45:14.916594  941739 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-251602
	I1005 20:45:14.935722  941739 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602/config.json ...
	I1005 20:45:14.935987  941739 machine.go:88] provisioning docker machine ...
	I1005 20:45:14.936015  941739 ubuntu.go:169] provisioning hostname "newest-cni-251602"
	I1005 20:45:14.936080  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:14.954269  941739 main.go:141] libmachine: Using SSH client type: native
	I1005 20:45:14.954655  941739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1005 20:45:14.954675  941739 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-251602 && echo "newest-cni-251602" | sudo tee /etc/hostname
	I1005 20:45:14.955367  941739 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60694->127.0.0.1:33423: read: connection reset by peer
	I1005 20:45:18.101383  941739 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-251602
	
	I1005 20:45:18.101493  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:18.118632  941739 main.go:141] libmachine: Using SSH client type: native
	I1005 20:45:18.118970  941739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1005 20:45:18.118988  941739 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-251602' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-251602/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-251602' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1005 20:45:18.254181  941739 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1005 20:45:18.254212  941739 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17363-491115/.minikube CaCertPath:/home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17363-491115/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17363-491115/.minikube}
	I1005 20:45:18.254247  941739 ubuntu.go:177] setting up certificates
	I1005 20:45:18.254259  941739 provision.go:83] configureAuth start
	I1005 20:45:18.254314  941739 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-251602
	I1005 20:45:18.271133  941739 provision.go:138] copyHostCerts
	I1005 20:45:18.271209  941739 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-491115/.minikube/ca.pem, removing ...
	I1005 20:45:18.271225  941739 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-491115/.minikube/ca.pem
	I1005 20:45:18.271301  941739 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17363-491115/.minikube/ca.pem (1082 bytes)
	I1005 20:45:18.271415  941739 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-491115/.minikube/cert.pem, removing ...
	I1005 20:45:18.271430  941739 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-491115/.minikube/cert.pem
	I1005 20:45:18.271455  941739 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17363-491115/.minikube/cert.pem (1123 bytes)
	I1005 20:45:18.271518  941739 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-491115/.minikube/key.pem, removing ...
	I1005 20:45:18.271526  941739 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-491115/.minikube/key.pem
	I1005 20:45:18.271548  941739 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17363-491115/.minikube/key.pem (1679 bytes)
	I1005 20:45:18.271607  941739 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17363-491115/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca-key.pem org=jenkins.newest-cni-251602 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-251602]
	I1005 20:45:18.410529  941739 provision.go:172] copyRemoteCerts
	I1005 20:45:18.410591  941739 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1005 20:45:18.410642  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:18.427655  941739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/newest-cni-251602/id_rsa Username:docker}
	I1005 20:45:18.525913  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1005 20:45:18.548522  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1005 20:45:18.571080  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1005 20:45:18.594270  941739 provision.go:86] duration metric: configureAuth took 339.997588ms
	I1005 20:45:18.594302  941739 ubuntu.go:193] setting minikube options for container-runtime
	I1005 20:45:18.594515  941739 config.go:182] Loaded profile config "newest-cni-251602": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1005 20:45:18.594580  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:18.611692  941739 main.go:141] libmachine: Using SSH client type: native
	I1005 20:45:18.612072  941739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1005 20:45:18.612089  941739 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1005 20:45:18.745964  941739 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1005 20:45:18.745987  941739 ubuntu.go:71] root file system type: overlay
	I1005 20:45:18.746127  941739 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1005 20:45:18.746195  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:18.763221  941739 main.go:141] libmachine: Using SSH client type: native
	I1005 20:45:18.763676  941739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1005 20:45:18.763773  941739 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1005 20:45:18.908747  941739 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1005 20:45:18.908833  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:18.927242  941739 main.go:141] libmachine: Using SSH client type: native
	I1005 20:45:18.927586  941739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1005 20:45:18.927612  941739 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1005 20:45:19.070807  941739 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1005 20:45:19.070845  941739 machine.go:91] provisioned docker machine in 4.134838843s
	I1005 20:45:19.070863  941739 start.go:300] post-start starting for "newest-cni-251602" (driver="docker")
	I1005 20:45:19.070880  941739 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1005 20:45:19.070965  941739 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1005 20:45:19.071034  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:19.088361  941739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/newest-cni-251602/id_rsa Username:docker}
	I1005 20:45:19.186060  941739 ssh_runner.go:195] Run: cat /etc/os-release
	I1005 20:45:19.189266  941739 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1005 20:45:19.189348  941739 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1005 20:45:19.189371  941739 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1005 20:45:19.189382  941739 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1005 20:45:19.189396  941739 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-491115/.minikube/addons for local assets ...
	I1005 20:45:19.189452  941739 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-491115/.minikube/files for local assets ...
	I1005 20:45:19.189539  941739 filesync.go:149] local asset: /home/jenkins/minikube-integration/17363-491115/.minikube/files/etc/ssl/certs/4979262.pem -> 4979262.pem in /etc/ssl/certs
	I1005 20:45:19.189654  941739 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1005 20:45:19.198001  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/files/etc/ssl/certs/4979262.pem --> /etc/ssl/certs/4979262.pem (1708 bytes)
	I1005 20:45:19.219671  941739 start.go:303] post-start completed in 148.789062ms
	I1005 20:45:19.219760  941739 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1005 20:45:19.219819  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:19.237287  941739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/newest-cni-251602/id_rsa Username:docker}
	I1005 20:45:19.330407  941739 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1005 20:45:19.334776  941739 fix.go:56] fixHost completed within 4.748135457s
	I1005 20:45:19.334813  941739 start.go:83] releasing machines lock for "newest-cni-251602", held for 4.7482043s
	I1005 20:45:19.334891  941739 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-251602
	I1005 20:45:19.351556  941739 ssh_runner.go:195] Run: cat /version.json
	I1005 20:45:19.351608  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:19.351662  941739 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1005 20:45:19.351741  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:19.368619  941739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/newest-cni-251602/id_rsa Username:docker}
	I1005 20:45:19.369076  941739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/newest-cni-251602/id_rsa Username:docker}
	I1005 20:45:19.550177  941739 ssh_runner.go:195] Run: systemctl --version
	I1005 20:45:19.554696  941739 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1005 20:45:19.559119  941739 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1005 20:45:19.576904  941739 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1005 20:45:19.576985  941739 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1005 20:45:19.585375  941739 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1005 20:45:19.585410  941739 start.go:469] detecting cgroup driver to use...
	I1005 20:45:19.585444  941739 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1005 20:45:19.585560  941739 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1005 20:45:19.600124  941739 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1005 20:45:19.609154  941739 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1005 20:45:19.618149  941739 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1005 20:45:19.618216  941739 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1005 20:45:19.627522  941739 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1005 20:45:19.636836  941739 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1005 20:45:19.646086  941739 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1005 20:45:19.655673  941739 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1005 20:45:19.664512  941739 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1005 20:45:19.674505  941739 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1005 20:45:19.682683  941739 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1005 20:45:19.691073  941739 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1005 20:45:19.769287  941739 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1005 20:45:19.852660  941739 start.go:469] detecting cgroup driver to use...
	I1005 20:45:19.852792  941739 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1005 20:45:19.852882  941739 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1005 20:45:19.864848  941739 cruntime.go:277] skipping containerd shutdown because we are bound to it
	I1005 20:45:19.864918  941739 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1005 20:45:19.877630  941739 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1005 20:45:19.895392  941739 ssh_runner.go:195] Run: which cri-dockerd
	I1005 20:45:19.899661  941739 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1005 20:45:19.918552  941739 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1005 20:45:19.936911  941739 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1005 20:45:20.046865  941739 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1005 20:45:20.144163  941739 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I1005 20:45:20.144299  941739 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1005 20:45:20.161707  941739 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1005 20:45:20.251848  941739 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1005 20:45:20.520825  941739 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1005 20:45:20.605718  941739 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1005 20:45:20.688963  941739 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1005 20:45:20.773512  941739 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1005 20:45:20.854013  941739 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1005 20:45:20.867324  941739 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1005 20:45:20.946882  941739 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1005 20:45:21.017496  941739 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1005 20:45:21.017569  941739 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1005 20:45:21.021797  941739 start.go:537] Will wait 60s for crictl version
	I1005 20:45:21.021856  941739 ssh_runner.go:195] Run: which crictl
	I1005 20:45:21.025426  941739 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1005 20:45:21.070905  941739 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1005 20:45:21.070975  941739 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1005 20:45:21.094936  941739 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1005 20:45:21.121912  941739 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I1005 20:45:21.121999  941739 cli_runner.go:164] Run: docker network inspect newest-cni-251602 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1005 20:45:21.138556  941739 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I1005 20:45:21.142440  941739 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1005 20:45:21.154570  941739 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1005 20:45:21.157976  941739 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1005 20:45:21.158071  941739 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1005 20:45:21.178251  941739 docker.go:664] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1005 20:45:21.178278  941739 docker.go:594] Images already preloaded, skipping extraction
	I1005 20:45:21.178347  941739 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1005 20:45:21.197723  941739 docker.go:664] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1005 20:45:21.197759  941739 cache_images.go:84] Images are preloaded, skipping loading
	I1005 20:45:21.197823  941739 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1005 20:45:21.251580  941739 cni.go:84] Creating CNI manager for ""
	I1005 20:45:21.251616  941739 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1005 20:45:21.251639  941739 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I1005 20:45:21.251658  941739 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-251602 NodeName:newest-cni-251602 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1005 20:45:21.251840  941739 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-251602"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
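
The KubeletConfiguration above pins cgroupDriver to cgroupfs, matching the driver minikube detected earlier with docker info; if the two ever disagree the kubelet will refuse to start pods. A quick consistency check (a sketch; the kubelet config path is the one kubeadm writes, referenced later in this log):

    # Both commands should report the same driver (cgroupfs here).
    docker info --format '{{.CgroupDriver}}'
    sudo grep cgroupDriver /var/lib/kubelet/config.yaml
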
	
	I1005 20:45:21.251930  941739 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-251602 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:newest-cni-251602 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
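
The drop-in above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below; once it is in place, the merged unit can be inspected and reloaded with standard systemd tooling (a sketch, not something minikube itself runs here):

    # Show the kubelet unit plus its 10-kubeadm.conf drop-in, then reload it.
    systemctl cat kubelet
    sudo systemctl daemon-reload
    sudo systemctl restart kubelet
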
	I1005 20:45:21.251984  941739 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1005 20:45:21.260656  941739 binaries.go:44] Found k8s binaries, skipping transfer
	I1005 20:45:21.260726  941739 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1005 20:45:21.269056  941739 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (415 bytes)
	I1005 20:45:21.286089  941739 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1005 20:45:21.302730  941739 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1005 20:45:21.319579  941739 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1005 20:45:21.322925  941739 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1005 20:45:21.333438  941739 certs.go:56] Setting up /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602 for IP: 192.168.67.2
	I1005 20:45:21.333472  941739 certs.go:190] acquiring lock for shared ca certs: {Name:mka6627fa5c31076c5fa233a6bbda946476bff5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:45:21.333619  941739 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17363-491115/.minikube/ca.key
	I1005 20:45:21.333654  941739 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17363-491115/.minikube/proxy-client-ca.key
	I1005 20:45:21.333737  941739 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602/client.key
	I1005 20:45:21.333791  941739 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602/apiserver.key.c7fa3a9e
	I1005 20:45:21.333823  941739 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602/proxy-client.key
	I1005 20:45:21.333912  941739 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/home/jenkins/minikube-integration/17363-491115/.minikube/certs/497926.pem (1338 bytes)
	W1005 20:45:21.333938  941739 certs.go:433] ignoring /home/jenkins/minikube-integration/17363-491115/.minikube/certs/home/jenkins/minikube-integration/17363-491115/.minikube/certs/497926_empty.pem, impossibly tiny 0 bytes
	I1005 20:45:21.333949  941739 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca-key.pem (1671 bytes)
	I1005 20:45:21.333973  941739 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/home/jenkins/minikube-integration/17363-491115/.minikube/certs/ca.pem (1082 bytes)
	I1005 20:45:21.334008  941739 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/home/jenkins/minikube-integration/17363-491115/.minikube/certs/cert.pem (1123 bytes)
	I1005 20:45:21.334047  941739 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-491115/.minikube/certs/home/jenkins/minikube-integration/17363-491115/.minikube/certs/key.pem (1679 bytes)
	I1005 20:45:21.334102  941739 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-491115/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17363-491115/.minikube/files/etc/ssl/certs/4979262.pem (1708 bytes)
	I1005 20:45:21.334741  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1005 20:45:21.357132  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1005 20:45:21.379412  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1005 20:45:21.402402  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/newest-cni-251602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1005 20:45:21.425553  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1005 20:45:21.448572  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1005 20:45:21.470803  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1005 20:45:21.492671  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1005 20:45:21.514617  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/certs/497926.pem --> /usr/share/ca-certificates/497926.pem (1338 bytes)
	I1005 20:45:21.537065  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/files/etc/ssl/certs/4979262.pem --> /usr/share/ca-certificates/4979262.pem (1708 bytes)
	I1005 20:45:21.559657  941739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-491115/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1005 20:45:21.582144  941739 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1005 20:45:21.598672  941739 ssh_runner.go:195] Run: openssl version
	I1005 20:45:21.604061  941739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/497926.pem && ln -fs /usr/share/ca-certificates/497926.pem /etc/ssl/certs/497926.pem"
	I1005 20:45:21.613694  941739 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/497926.pem
	I1005 20:45:21.617122  941739 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  5 20:07 /usr/share/ca-certificates/497926.pem
	I1005 20:45:21.617186  941739 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/497926.pem
	I1005 20:45:21.623795  941739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/497926.pem /etc/ssl/certs/51391683.0"
	I1005 20:45:21.632192  941739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4979262.pem && ln -fs /usr/share/ca-certificates/4979262.pem /etc/ssl/certs/4979262.pem"
	I1005 20:45:21.641540  941739 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4979262.pem
	I1005 20:45:21.644804  941739 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  5 20:07 /usr/share/ca-certificates/4979262.pem
	I1005 20:45:21.644853  941739 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4979262.pem
	I1005 20:45:21.651399  941739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4979262.pem /etc/ssl/certs/3ec20f2e.0"
	I1005 20:45:21.659734  941739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1005 20:45:21.668779  941739 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1005 20:45:21.672400  941739 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  5 20:03 /usr/share/ca-certificates/minikubeCA.pem
	I1005 20:45:21.672473  941739 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1005 20:45:21.678971  941739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
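
The link names 51391683.0, 3ec20f2e.0 and b5213941.0 are OpenSSL subject hashes, which is why each certificate is first run through openssl x509 -hash. Recreating the minikubeCA link by hand would look like this (a sketch of the same steps):

    # The subject hash (b5213941 for minikubeCA above) names the trust-store symlink.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
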
	I1005 20:45:21.688374  941739 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1005 20:45:21.691701  941739 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1005 20:45:21.698446  941739 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1005 20:45:21.704585  941739 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1005 20:45:21.710930  941739 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1005 20:45:21.717269  941739 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1005 20:45:21.723706  941739 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
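
Each control-plane certificate is checked with -checkend 86400, which exits non-zero if the certificate expires within 24 hours. The same sweep as a single loop (a sketch; the file list is taken from the commands above):

    # Flag any certificate that will expire within the next day.
    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
             etcd/server etcd/healthcheck-client etcd/peer; do
      openssl x509 -noout -checkend 86400 \
        -in "/var/lib/minikube/certs/${c}.crt" >/dev/null || echo "expiring soon: ${c}"
    done
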
	I1005 20:45:21.730244  941739 kubeadm.go:404] StartCluster: {Name:newest-cni-251602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-251602 Namespace:default APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Mul
tiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 20:45:21.730390  941739 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1005 20:45:21.749238  941739 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1005 20:45:21.757704  941739 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1005 20:45:21.757777  941739 kubeadm.go:636] restartCluster start
	I1005 20:45:21.757833  941739 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1005 20:45:21.766002  941739 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:21.766568  941739 kubeconfig.go:135] verify returned: extract IP: "newest-cni-251602" does not appear in /home/jenkins/minikube-integration/17363-491115/kubeconfig
	I1005 20:45:21.766798  941739 kubeconfig.go:146] "newest-cni-251602" context is missing from /home/jenkins/minikube-integration/17363-491115/kubeconfig - will repair!
	I1005 20:45:21.767178  941739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-491115/kubeconfig: {Name:mkd6618cb8d42fbccf8ec108c3891f3e690ff249 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:45:21.768584  941739 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1005 20:45:21.777081  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:21.777142  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:21.786498  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:21.786517  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:21.786555  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:21.795849  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:22.296543  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:22.296643  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:22.307113  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:22.796806  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:22.796920  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:22.807658  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:23.296196  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:23.296307  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:23.307063  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:23.796660  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:23.796750  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:23.807326  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:24.296919  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:24.297003  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:24.307595  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:24.796497  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:24.796585  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:24.807169  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:25.296770  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:25.296888  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:25.307546  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:25.796061  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:25.796166  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:25.806783  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:26.296330  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:26.296433  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:26.307074  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:26.796470  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:26.796577  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:26.806786  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:27.296331  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:27.296415  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:27.306522  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:27.796815  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:27.796927  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:27.807056  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:28.296676  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:28.296772  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:28.307093  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:28.796685  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:28.796792  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:28.807035  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:29.296656  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:29.296766  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:29.306878  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:29.796676  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:29.796758  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:29.807141  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:30.296755  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:30.296850  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:30.306907  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:30.796266  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:30.796377  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:30.806636  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:31.296136  941739 api_server.go:166] Checking apiserver status ...
	I1005 20:45:31.296248  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 20:45:31.306343  941739 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
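
The repeated blocks above are minikube polling for a kube-apiserver process roughly every half second until the verification deadline (20:45:21 to 20:45:31) runs out, at which point restartCluster decides the node needs reconfiguring. The equivalent wait loop (a sketch mirroring the pgrep call in the log):

    # Wait up to ~10 seconds for an apiserver process to show up.
    for i in $(seq 1 20); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
      sleep 0.5
    done
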
	I1005 20:45:31.778147  941739 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1005 20:45:31.778197  941739 kubeadm.go:1128] stopping kube-system containers ...
	I1005 20:45:31.778276  941739 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1005 20:45:31.799139  941739 docker.go:463] Stopping containers: [edbeda11d2dc 9f2cb55357e2 f6bfaab5a6ac 67260f7c09c8 fd1feebd6b30 dd66b2b22702 01b6b78a55a3 b4382c5ea59f 12ffa1278374 5918e2e006de 5959b2ce7826 84e04d4b3dda 2fee3456f3f2 4f3db88655d7]
	I1005 20:45:31.799221  941739 ssh_runner.go:195] Run: docker stop edbeda11d2dc 9f2cb55357e2 f6bfaab5a6ac 67260f7c09c8 fd1feebd6b30 dd66b2b22702 01b6b78a55a3 b4382c5ea59f 12ffa1278374 5918e2e006de 5959b2ce7826 84e04d4b3dda 2fee3456f3f2 4f3db88655d7
	I1005 20:45:31.819269  941739 ssh_runner.go:195] Run: sudo systemctl stop kubelet
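
Before reconfiguring, every kube-system container is stopped and the kubelet is shut down so nothing respawns the old pods. The same two steps can be piped together (a sketch based on the docker ps filter shown above):

    # Stop all kube-system containers, then the kubelet that would restart them.
    docker ps -a --filter 'name=k8s_.*_(kube-system)_' --format '{{.ID}}' | xargs -r docker stop
    sudo systemctl stop kubelet
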
	I1005 20:45:31.831589  941739 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1005 20:45:31.840562  941739 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Oct  5 20:44 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Oct  5 20:44 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Oct  5 20:44 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Oct  5 20:44 /etc/kubernetes/scheduler.conf
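
Each kubeconfig under /etc/kubernetes is then grepped for the expected endpoint; the files that do not mention https://control-plane.minikube.internal:8443 (controller-manager.conf and scheduler.conf below) are removed so kubeadm can regenerate them. Listing the offenders in one command (a sketch):

    # -L prints the files that do NOT contain the expected server endpoint.
    sudo grep -L 'https://control-plane.minikube.internal:8443' \
      /etc/kubernetes/{admin,kubelet,controller-manager,scheduler}.conf
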
	
	I1005 20:45:31.840635  941739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1005 20:45:31.848959  941739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1005 20:45:31.857521  941739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1005 20:45:31.865912  941739 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:31.865992  941739 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1005 20:45:31.874539  941739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1005 20:45:31.882971  941739 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1005 20:45:31.883036  941739 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1005 20:45:31.891165  941739 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1005 20:45:31.899809  941739 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1005 20:45:31.899844  941739 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1005 20:45:31.950458  941739 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1005 20:45:32.439655  941739 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1005 20:45:32.588235  941739 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1005 20:45:32.644120  941739 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
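
Rather than a full kubeadm init, the restart re-runs only the phases needed to bring the control plane back: certs, kubeconfig, kubelet-start, control-plane and etcd, all against the kubeadm.yaml staged earlier. Condensed into one loop (a sketch of the same commands):

    # Re-run the individual kubeadm phases used for an in-place restart.
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done
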
	I1005 20:45:32.740951  941739 api_server.go:52] waiting for apiserver process to appear ...
	I1005 20:45:32.741029  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1005 20:45:32.753615  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1005 20:45:33.330126  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1005 20:45:33.829788  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1005 20:45:33.846461  941739 api_server.go:72] duration metric: took 1.105507442s to wait for apiserver process to appear ...
	I1005 20:45:33.846542  941739 api_server.go:88] waiting for apiserver healthz status ...
	I1005 20:45:33.846578  941739 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 20:45:33.846977  941739 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I1005 20:45:33.847055  941739 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 20:45:33.847357  941739 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I1005 20:45:34.348075  941739 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 20:45:36.627973  941739 api_server.go:279] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1005 20:45:36.628063  941739 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1005 20:45:36.628087  941739 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 20:45:36.740856  941739 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1005 20:45:36.740956  941739 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1005 20:45:36.848296  941739 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 20:45:36.853601  941739 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1005 20:45:36.853628  941739 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1005 20:45:37.348237  941739 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 20:45:37.352593  941739 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1005 20:45:37.352618  941739 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1005 20:45:37.847923  941739 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 20:45:37.852873  941739 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1005 20:45:37.852902  941739 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1005 20:45:38.348152  941739 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 20:45:38.354442  941739 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
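
The probe succeeds once the remaining post-start hooks (rbac/bootstrap-roles in the last 500 response above) complete. The endpoint can also be queried directly; ?verbose reproduces the per-check breakdown, and a client certificate is needed because anonymous access was rejected with 403 earlier (a sketch; using apiserver-kubelet-client and its key is an assumption about which local identity is authorized):

    # Ask the apiserver for a per-check health report over TLS.
    curl --cacert /var/lib/minikube/certs/ca.crt \
         --cert /var/lib/minikube/certs/apiserver-kubelet-client.crt \
         --key /var/lib/minikube/certs/apiserver-kubelet-client.key \
         'https://192.168.67.2:8443/healthz?verbose'
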
	I1005 20:45:38.363755  941739 api_server.go:141] control plane version: v1.28.2
	I1005 20:45:38.363785  941739 api_server.go:131] duration metric: took 4.517223524s to wait for apiserver health ...
	I1005 20:45:38.363796  941739 cni.go:84] Creating CNI manager for ""
	I1005 20:45:38.363807  941739 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1005 20:45:38.365566  941739 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1005 20:45:38.366945  941739 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1005 20:45:38.375605  941739 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1005 20:45:38.418968  941739 system_pods.go:43] waiting for kube-system pods to appear ...
	I1005 20:45:38.430492  941739 system_pods.go:59] 8 kube-system pods found
	I1005 20:45:38.430531  941739 system_pods.go:61] "coredns-5dd5756b68-bm584" [0aa18475-85e2-44fd-b2f3-bea8e676ae2e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:45:38.430541  941739 system_pods.go:61] "etcd-newest-cni-251602" [34417493-e814-4f29-b447-2863b3cfcf27] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1005 20:45:38.430550  941739 system_pods.go:61] "kube-apiserver-newest-cni-251602" [ea09c35f-5b0a-4a6f-b11c-8a5516fbd861] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1005 20:45:38.430560  941739 system_pods.go:61] "kube-controller-manager-newest-cni-251602" [4636875d-c7bf-4080-a173-2ea829bdbbc3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1005 20:45:38.430571  941739 system_pods.go:61] "kube-proxy-vtq52" [c6349e67-7d8d-4cca-9b07-1eb70a41bb60] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:45:38.430603  941739 system_pods.go:61] "kube-scheduler-newest-cni-251602" [c3320fb3-4468-4f4c-ac6e-3d3aa7c4af0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1005 20:45:38.430617  941739 system_pods.go:61] "metrics-server-57f55c9bc5-75jt5" [6455e407-161e-4abe-94a4-8fb5968789b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1005 20:45:38.430631  941739 system_pods.go:61] "storage-provisioner" [50c99270-263a-466f-ae78-8da1c3fe7545] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:45:38.430641  941739 system_pods.go:74] duration metric: took 11.652857ms to wait for pod list to return data ...
	I1005 20:45:38.430649  941739 node_conditions.go:102] verifying NodePressure condition ...
	I1005 20:45:38.435489  941739 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1005 20:45:38.435522  941739 node_conditions.go:123] node cpu capacity is 8
	I1005 20:45:38.435538  941739 node_conditions.go:105] duration metric: took 4.879676ms to run NodePressure ...
	I1005 20:45:38.435565  941739 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1005 20:45:38.709413  941739 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1005 20:45:38.718207  941739 ops.go:34] apiserver oom_adj: -16
	I1005 20:45:38.718235  941739 kubeadm.go:640] restartCluster took 16.960444278s
	I1005 20:45:38.718247  941739 kubeadm.go:406] StartCluster complete in 16.988017482s
	I1005 20:45:38.718274  941739 settings.go:142] acquiring lock: {Name:mk74c5e95d8c9fcaf06097e6d304129504752ec7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:45:38.718351  941739 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17363-491115/kubeconfig
	I1005 20:45:38.719220  941739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-491115/kubeconfig: {Name:mkd6618cb8d42fbccf8ec108c3891f3e690ff249 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:45:38.719473  941739 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1005 20:45:38.719630  941739 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1005 20:45:38.719714  941739 config.go:182] Loaded profile config "newest-cni-251602": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1005 20:45:38.719720  941739 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-251602"
	I1005 20:45:38.719738  941739 addons.go:231] Setting addon storage-provisioner=true in "newest-cni-251602"
	W1005 20:45:38.719746  941739 addons.go:240] addon storage-provisioner should already be in state true
	I1005 20:45:38.719745  941739 addons.go:69] Setting metrics-server=true in profile "newest-cni-251602"
	I1005 20:45:38.719747  941739 addons.go:69] Setting default-storageclass=true in profile "newest-cni-251602"
	I1005 20:45:38.719763  941739 addons.go:231] Setting addon metrics-server=true in "newest-cni-251602"
	W1005 20:45:38.719772  941739 addons.go:240] addon metrics-server should already be in state true
	I1005 20:45:38.719786  941739 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-251602"
	I1005 20:45:38.719799  941739 host.go:66] Checking if "newest-cni-251602" exists ...
	I1005 20:45:38.719813  941739 host.go:66] Checking if "newest-cni-251602" exists ...
	I1005 20:45:38.719800  941739 addons.go:69] Setting dashboard=true in profile "newest-cni-251602"
	I1005 20:45:38.719834  941739 addons.go:231] Setting addon dashboard=true in "newest-cni-251602"
	W1005 20:45:38.719843  941739 addons.go:240] addon dashboard should already be in state true
	I1005 20:45:38.719903  941739 host.go:66] Checking if "newest-cni-251602" exists ...
	I1005 20:45:38.720124  941739 cli_runner.go:164] Run: docker container inspect newest-cni-251602 --format={{.State.Status}}
	I1005 20:45:38.720279  941739 cli_runner.go:164] Run: docker container inspect newest-cni-251602 --format={{.State.Status}}
	I1005 20:45:38.720282  941739 cli_runner.go:164] Run: docker container inspect newest-cni-251602 --format={{.State.Status}}
	I1005 20:45:38.720344  941739 cli_runner.go:164] Run: docker container inspect newest-cni-251602 --format={{.State.Status}}
	I1005 20:45:38.723756  941739 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-251602" context rescaled to 1 replicas
	I1005 20:45:38.723804  941739 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1005 20:45:38.727049  941739 out.go:177] * Verifying Kubernetes components...
	I1005 20:45:38.728767  941739 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 20:45:38.743950  941739 addons.go:231] Setting addon default-storageclass=true in "newest-cni-251602"
	W1005 20:45:38.744161  941739 addons.go:240] addon default-storageclass should already be in state true
	I1005 20:45:38.744212  941739 host.go:66] Checking if "newest-cni-251602" exists ...
	I1005 20:45:38.744748  941739 cli_runner.go:164] Run: docker container inspect newest-cni-251602 --format={{.State.Status}}
	I1005 20:45:38.761605  941739 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1005 20:45:38.762994  941739 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1005 20:45:38.764361  941739 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1005 20:45:38.762962  941739 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1005 20:45:38.766924  941739 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1005 20:45:38.765746  941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1005 20:45:38.765763  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1005 20:45:38.768302  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:38.768331  941739 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1005 20:45:38.768349  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1005 20:45:38.768396  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:38.768481  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1005 20:45:38.768528  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:38.771651  941739 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1005 20:45:38.771678  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1005 20:45:38.771838  941739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-251602
	I1005 20:45:38.792082  941739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/newest-cni-251602/id_rsa Username:docker}
	I1005 20:45:38.798427  941739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/newest-cni-251602/id_rsa Username:docker}
	I1005 20:45:38.803117  941739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/newest-cni-251602/id_rsa Username:docker}
	I1005 20:45:38.806797  941739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/newest-cni-251602/id_rsa Username:docker}
	I1005 20:45:38.847194  941739 start.go:896] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1005 20:45:38.847278  941739 api_server.go:52] waiting for apiserver process to appear ...
	I1005 20:45:38.847344  941739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1005 20:45:38.926982  941739 api_server.go:72] duration metric: took 203.134329ms to wait for apiserver process to appear ...
	I1005 20:45:38.927013  941739 api_server.go:88] waiting for apiserver healthz status ...
	I1005 20:45:38.927033  941739 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 20:45:38.931963  941739 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1005 20:45:38.933196  941739 api_server.go:141] control plane version: v1.28.2
	I1005 20:45:38.933257  941739 api_server.go:131] duration metric: took 6.235518ms to wait for apiserver health ...
	I1005 20:45:38.933268  941739 system_pods.go:43] waiting for kube-system pods to appear ...
	I1005 20:45:38.938837  941739 system_pods.go:59] 8 kube-system pods found
	I1005 20:45:38.938869  941739 system_pods.go:61] "coredns-5dd5756b68-bm584" [0aa18475-85e2-44fd-b2f3-bea8e676ae2e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:45:38.938882  941739 system_pods.go:61] "etcd-newest-cni-251602" [34417493-e814-4f29-b447-2863b3cfcf27] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1005 20:45:38.938893  941739 system_pods.go:61] "kube-apiserver-newest-cni-251602" [ea09c35f-5b0a-4a6f-b11c-8a5516fbd861] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1005 20:45:38.938906  941739 system_pods.go:61] "kube-controller-manager-newest-cni-251602" [4636875d-c7bf-4080-a173-2ea829bdbbc3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1005 20:45:38.938913  941739 system_pods.go:61] "kube-proxy-vtq52" [c6349e67-7d8d-4cca-9b07-1eb70a41bb60] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:45:38.938919  941739 system_pods.go:61] "kube-scheduler-newest-cni-251602" [c3320fb3-4468-4f4c-ac6e-3d3aa7c4af0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1005 20:45:38.938932  941739 system_pods.go:61] "metrics-server-57f55c9bc5-75jt5" [6455e407-161e-4abe-94a4-8fb5968789b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1005 20:45:38.938943  941739 system_pods.go:61] "storage-provisioner" [50c99270-263a-466f-ae78-8da1c3fe7545] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:45:38.938955  941739 system_pods.go:74] duration metric: took 5.679606ms to wait for pod list to return data ...
	I1005 20:45:38.938967  941739 default_sa.go:34] waiting for default service account to be created ...
	I1005 20:45:38.941596  941739 default_sa.go:45] found service account: "default"
	I1005 20:45:38.941625  941739 default_sa.go:55] duration metric: took 2.647466ms for default service account to be created ...
	I1005 20:45:38.941638  941739 kubeadm.go:581] duration metric: took 217.801105ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I1005 20:45:38.941657  941739 node_conditions.go:102] verifying NodePressure condition ...
	I1005 20:45:38.944359  941739 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1005 20:45:38.944385  941739 node_conditions.go:123] node cpu capacity is 8
	I1005 20:45:38.944399  941739 node_conditions.go:105] duration metric: took 2.735534ms to run NodePressure ...
	I1005 20:45:38.944414  941739 start.go:228] waiting for startup goroutines ...
	I1005 20:45:39.031121  941739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1005 20:45:39.031835  941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1005 20:45:39.031864  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1005 20:45:39.037663  941739 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1005 20:45:39.037689  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1005 20:45:39.038028  941739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1005 20:45:39.052055  941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1005 20:45:39.052084  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1005 20:45:39.122929  941739 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1005 20:45:39.122960  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1005 20:45:39.135708  941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1005 20:45:39.135738  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1005 20:45:39.148797  941739 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1005 20:45:39.148828  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1005 20:45:39.233123  941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1005 20:45:39.233156  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1005 20:45:39.246996  941739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1005 20:45:39.325634  941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1005 20:45:39.325672  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1005 20:45:39.348115  941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1005 20:45:39.348137  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1005 20:45:39.436685  941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1005 20:45:39.436712  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1005 20:45:39.528259  941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1005 20:45:39.528287  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1005 20:45:39.547672  941739 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1005 20:45:39.547706  941739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1005 20:45:39.565975  941739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1005 20:45:40.443947  941739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.412776284s)
	I1005 20:45:40.444070  941739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.406004214s)
	I1005 20:45:40.571364  941739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.324316067s)
	I1005 20:45:40.571417  941739 addons.go:467] Verifying addon metrics-server=true in "newest-cni-251602"
	I1005 20:45:40.917851  941739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.351821533s)
	I1005 20:45:40.919845  941739 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-251602 addons enable metrics-server	
	
	
	I1005 20:45:40.921418  941739 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1005 20:45:40.922771  941739 addons.go:502] enable addons completed in 2.203154287s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1005 20:45:40.922805  941739 start.go:233] waiting for cluster config update ...
	I1005 20:45:40.922816  941739 start.go:242] writing updated cluster config ...
	I1005 20:45:40.923059  941739 ssh_runner.go:195] Run: rm -f paused
	I1005 20:45:40.970862  941739 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1005 20:45:40.972904  941739 out.go:177] * Done! kubectl is now configured to use "newest-cni-251602" cluster and "default" namespace by default
	I1005 20:46:27.657681  848852 system_pods.go:86] 7 kube-system pods found
	I1005 20:46:27.657726  848852 system_pods.go:89] "coredns-5644d7b6d9-k2f47" [dd4e5395-fbf1-4504-89ad-703d2ccd8f92] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 20:46:27.657736  848852 system_pods.go:89] "etcd-old-k8s-version-330869" [2112dddc-14b1-4f50-b36d-5ce7c191db8d] Running
	I1005 20:46:27.657741  848852 system_pods.go:89] "kube-apiserver-old-k8s-version-330869" [0897cd75-2a81-4e47-b71a-a41af9f23d48] Running
	I1005 20:46:27.657747  848852 system_pods.go:89] "kube-controller-manager-old-k8s-version-330869" [6c11b639-3464-496e-8d91-372e8b5f9501] Running
	I1005 20:46:27.657753  848852 system_pods.go:89] "kube-proxy-n9cwb" [16d00055-ce0e-43f2-9a1e-cf089271eb10] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1005 20:46:27.657758  848852 system_pods.go:89] "kube-scheduler-old-k8s-version-330869" [078e37d1-ef80-4819-b90e-e7049cd64712] Running
	I1005 20:46:27.657766  848852 system_pods.go:89] "storage-provisioner" [0d554ea9-aabf-4709-8ce8-d665afecaac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1005 20:46:27.660128  848852 out.go:177] 
	W1005 20:46:27.662010  848852 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns, kube-proxy
	W1005 20:46:27.662024  848852 out.go:239] * 
	W1005 20:46:27.662801  848852 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1005 20:46:27.665143  848852 out.go:177] 
	
	* 
	* ==> Docker <==
	* Oct 05 20:37:00 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:00.741750210Z" level=info msg="Loading containers: start."
	Oct 05 20:37:00 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:00.835828175Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Oct 05 20:37:00 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:00.873800369Z" level=info msg="Loading containers: done."
	Oct 05 20:37:00 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:00.883733188Z" level=info msg="Docker daemon" commit=1a79695 graphdriver=overlay2 version=24.0.6
	Oct 05 20:37:00 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:00.883797436Z" level=info msg="Daemon has completed initialization"
	Oct 05 20:37:00 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:00.908207956Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 05 20:37:00 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:00.908224826Z" level=info msg="API listen on [::]:2376"
	Oct 05 20:37:00 old-k8s-version-330869 systemd[1]: Started Docker Application Container Engine.
	Oct 05 20:37:04 old-k8s-version-330869 systemd[1]: Stopping Docker Application Container Engine...
	Oct 05 20:37:04 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:04.385384032Z" level=info msg="Processing signal 'terminated'"
	Oct 05 20:37:04 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:04.387134388Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 05 20:37:04 old-k8s-version-330869 dockerd[1156]: time="2023-10-05T20:37:04.388068483Z" level=info msg="Daemon shutdown complete"
	Oct 05 20:37:04 old-k8s-version-330869 systemd[1]: docker.service: Deactivated successfully.
	Oct 05 20:37:04 old-k8s-version-330869 systemd[1]: Stopped Docker Application Container Engine.
	Oct 05 20:37:04 old-k8s-version-330869 systemd[1]: Starting Docker Application Container Engine...
	Oct 05 20:37:04 old-k8s-version-330869 dockerd[1366]: time="2023-10-05T20:37:04.451611505Z" level=info msg="Starting up"
	Oct 05 20:37:04 old-k8s-version-330869 dockerd[1366]: time="2023-10-05T20:37:04.461647092Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 05 20:37:06 old-k8s-version-330869 dockerd[1366]: time="2023-10-05T20:37:06.744132135Z" level=info msg="Loading containers: start."
	Oct 05 20:37:06 old-k8s-version-330869 dockerd[1366]: time="2023-10-05T20:37:06.839612411Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Oct 05 20:37:07 old-k8s-version-330869 dockerd[1366]: time="2023-10-05T20:37:07.001066181Z" level=info msg="Loading containers: done."
	Oct 05 20:37:07 old-k8s-version-330869 dockerd[1366]: time="2023-10-05T20:37:07.016241859Z" level=info msg="Docker daemon" commit=1a79695 graphdriver=overlay2 version=24.0.6
	Oct 05 20:37:07 old-k8s-version-330869 dockerd[1366]: time="2023-10-05T20:37:07.016300931Z" level=info msg="Daemon has completed initialization"
	Oct 05 20:37:07 old-k8s-version-330869 dockerd[1366]: time="2023-10-05T20:37:07.039742052Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 05 20:37:07 old-k8s-version-330869 dockerd[1366]: time="2023-10-05T20:37:07.039779377Z" level=info msg="API listen on [::]:2376"
	Oct 05 20:37:07 old-k8s-version-330869 systemd[1]: Started Docker Application Container Engine.
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	134727f163f16       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   About a minute ago   Running             busybox                   0                   a4acd2c611f56       busybox
	56472dff5f81f       6e38f40d628db                                                                                         16 minutes ago       Running             storage-provisioner       0                   64075514dc163       storage-provisioner
	9f0be33584868       bf261d1579144                                                                                         16 minutes ago       Running             coredns                   0                   2e7135e437f0c       coredns-5644d7b6d9-k2f47
	cef84f5b51c49       c21b0c7400f98                                                                                         16 minutes ago       Running             kube-proxy                0                   a228f4c03cdba       kube-proxy-n9cwb
	530e42b9f6c77       b2756210eeabf                                                                                         17 minutes ago       Running             etcd                      0                   7ab4e42c79a68       etcd-old-k8s-version-330869
	6c66019a6e010       06a629a7e51cd                                                                                         17 minutes ago       Running             kube-controller-manager   0                   e87f561b15eaf       kube-controller-manager-old-k8s-version-330869
	a576da8318f84       301ddc62b80b1                                                                                         17 minutes ago       Running             kube-scheduler            0                   84631805dc0e9       kube-scheduler-old-k8s-version-330869
	91420fd2d357f       b305571ca60a5                                                                                         17 minutes ago       Running             kube-apiserver            0                   a2a65ce6717dd       kube-apiserver-old-k8s-version-330869
	
	* 
	* ==> coredns [9f0be3358486] <==
	* .:53
	2023-10-05T20:37:40.506Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-10-05T20:37:40.507Z [INFO] CoreDNS-1.6.2
	2023-10-05T20:37:40.507Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-330869
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-330869
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=300d55cee86053f5b4c7a654fc8e7b9d3c030d53
	                    minikube.k8s.io/name=old-k8s-version-330869
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_05T20_37_23_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Oct 2023 20:37:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Oct 2023 20:54:21 +0000   Thu, 05 Oct 2023 20:37:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Oct 2023 20:54:21 +0000   Thu, 05 Oct 2023 20:37:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Oct 2023 20:54:21 +0000   Thu, 05 Oct 2023 20:37:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Oct 2023 20:54:21 +0000   Thu, 05 Oct 2023 20:52:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-330869
	Capacity:
	 cpu:                8
	 ephemeral-storage:  304681132Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32859420Ki
	 pods:               110
	Allocatable:
	 cpu:                8
	 ephemeral-storage:  304681132Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32859420Ki
	 pods:               110
	System Info:
	 Machine ID:                 da3d4e78336e4de3801cc5f1121e363a
	 System UUID:                fb98631f-d977-49f6-8d13-47582452d2b5
	 Boot ID:                    1c650140-d8f3-4a50-ac83-e0e6baf94598
	 Kernel Version:             5.15.0-1044-gcp
	 OS Image:                   Ubuntu 22.04.3 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  docker://24.0.6
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  default                    busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	  kube-system                coredns-5644d7b6d9-k2f47                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     16m
	  kube-system                etcd-old-k8s-version-330869                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                kube-apiserver-old-k8s-version-330869             250m (3%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                kube-controller-manager-old-k8s-version-330869    200m (2%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                kube-proxy-n9cwb                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                kube-scheduler-old-k8s-version-330869             100m (1%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                650m (8%)  0 (0%)
	  memory             70Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  17m (x8 over 17m)  kubelet, old-k8s-version-330869     Node old-k8s-version-330869 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet, old-k8s-version-330869     Node old-k8s-version-330869 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x7 over 17m)  kubelet, old-k8s-version-330869     Node old-k8s-version-330869 status is now: NodeHasSufficientPID
	  Normal  Starting                 16m                kube-proxy, old-k8s-version-330869  Starting kube-proxy.
	  Normal  NodeReady                111s               kubelet, old-k8s-version-330869     Node old-k8s-version-330869 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0e ee c2 a6 29 ac 08 06
	[Oct 5 20:39] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev bridge
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 fb 5f c9 9e d7 08 06
	[  +0.715332] IPv4: martian source 10.244.0.1 from 10.244.0.7, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e 50 38 95 e7 63 08 06
	[  +8.065920] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 06 69 10 43 1f 0b 08 06
	[ +16.180606] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 6a 13 59 d9 da 08 06
	[Oct 5 20:43] IPv4: martian source 10.244.0.1 from 10.244.0.8, on dev eth0
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff ae 52 50 a9 6f 53 08 06
	[Oct 5 20:44] IPv4: martian source 10.244.0.1 from 10.244.0.10, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 46 f6 d1 58 d2 08 06
	[ +19.224580] IPv4: martian source 10.244.0.1 from 10.244.0.7, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3e 9e 9c 80 d0 43 08 06
	[  +8.732079] IPv4: martian source 10.244.0.1 from 10.244.0.9, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 96 3b 8d 2f b2 6f 08 06
	[  +1.563207] IPv4: martian source 10.244.0.1 from 10.244.0.7, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 1a 9a 54 1a fc 08 06
	[  +5.814222] IPv4: martian source 10.244.0.1 from 10.244.0.9, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea e4 45 a0 bd b2 08 06
	[Oct 5 20:45] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff f2 32 f7 4c 9e 13 08 06
	[ +35.890083] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ba 7a 81 76 84 0a 08 06
	
	* 
	* ==> etcd [530e42b9f6c7] <==
	* 2023-10-05 20:37:13.543597 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-10-05 20:37:13.544562 I | etcdserver: 9f0758e1c58a86ed as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-10-05 20:37:13.544945 I | etcdserver/membership: added member 9f0758e1c58a86ed [https://192.168.85.2:2380] to cluster 68eaea490fab4e05
	2023-10-05 20:37:13.546416 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-10-05 20:37:13.546557 I | embed: listening for metrics on http://192.168.85.2:2381
	2023-10-05 20:37:13.546670 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-10-05 20:37:14.535543 I | raft: 9f0758e1c58a86ed is starting a new election at term 1
	2023-10-05 20:37:14.535587 I | raft: 9f0758e1c58a86ed became candidate at term 2
	2023-10-05 20:37:14.535619 I | raft: 9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2
	2023-10-05 20:37:14.535634 I | raft: 9f0758e1c58a86ed became leader at term 2
	2023-10-05 20:37:14.535644 I | raft: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
	2023-10-05 20:37:14.535840 I | etcdserver: setting up the initial cluster version to 3.3
	2023-10-05 20:37:14.536888 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-10-05 20:37:14.536932 I | etcdserver/api: enabled capabilities for version 3.3
	2023-10-05 20:37:14.536948 I | embed: ready to serve client requests
	2023-10-05 20:37:14.536984 I | etcdserver: published {Name:old-k8s-version-330869 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
	2023-10-05 20:37:14.537012 I | embed: ready to serve client requests
	2023-10-05 20:37:14.538573 I | embed: serving client requests on 192.168.85.2:2379
	2023-10-05 20:37:14.538614 I | embed: serving client requests on 127.0.0.1:2379
	2023-10-05 20:37:52.212401 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-5644d7b6d9-k2f47\" " with result "range_response_count:1 size:1693" took too long (122.897349ms) to execute
	2023-10-05 20:37:52.212479 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:215" took too long (125.865716ms) to execute
	2023-10-05 20:47:14.658642 I | mvcc: store.index: compact 592
	2023-10-05 20:47:14.659659 I | mvcc: finished scheduled compaction at 592 (took 692.439µs)
	2023-10-05 20:52:14.662212 I | mvcc: store.index: compact 837
	2023-10-05 20:52:14.663105 I | mvcc: finished scheduled compaction at 837 (took 586.403µs)
	
	* 
	* ==> kernel <==
	*  20:54:32 up  2:37,  0 users,  load average: 0.24, 0.60, 1.69
	Linux old-k8s-version-330869 5.15.0-1044-gcp #52~20.04.1-Ubuntu SMP Wed Sep 20 16:25:19 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kube-apiserver [91420fd2d357] <==
	* I1005 20:37:18.648312       1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
	E1005 20:37:18.650296       1 controller.go:154] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.85.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1005 20:37:18.651420       1 apiservice_controller.go:94] Starting APIServiceRegistrationController
	I1005 20:37:18.651503       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I1005 20:37:18.747656       1 cache.go:39] Caches are synced for autoregister controller
	I1005 20:37:18.749173       1 shared_informer.go:204] Caches are synced for crd-autoregister 
	I1005 20:37:18.762082       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1005 20:37:18.762123       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1005 20:37:18.844252       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I1005 20:37:19.647651       1 controller.go:107] OpenAPI AggregationController: Processing item 
	I1005 20:37:19.647684       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1005 20:37:19.647699       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1005 20:37:19.651492       1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
	I1005 20:37:19.654366       1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
	I1005 20:37:19.654390       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I1005 20:37:21.429015       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1005 20:37:21.708924       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1005 20:37:22.050878       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1005 20:37:22.051722       1 controller.go:606] quota admission added evaluator for: endpoints
	I1005 20:37:22.934960       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I1005 20:37:23.322631       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I1005 20:37:23.671803       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I1005 20:37:38.427328       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I1005 20:37:38.453764       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I1005 20:37:38.539579       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [6c66019a6e01] <==
	* I1005 20:37:38.482277       1 shared_informer.go:204] Caches are synced for stateful set 
	I1005 20:37:38.487462       1 shared_informer.go:204] Caches are synced for ReplicationController 
	I1005 20:37:38.487721       1 shared_informer.go:204] Caches are synced for GC 
	I1005 20:37:38.487734       1 shared_informer.go:204] Caches are synced for PVC protection 
	I1005 20:37:38.487705       1 shared_informer.go:204] Caches are synced for attach detach 
	I1005 20:37:38.512949       1 shared_informer.go:204] Caches are synced for ReplicaSet 
	I1005 20:37:38.537588       1 shared_informer.go:204] Caches are synced for deployment 
	I1005 20:37:38.537985       1 shared_informer.go:204] Caches are synced for resource quota 
	I1005 20:37:38.542695       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"59e37402-092c-492a-8a24-0e86b565f6d7", APIVersion:"apps/v1", ResourceVersion:"192", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-5644d7b6d9 to 2
	I1005 20:37:38.550402       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"09eba7cb-b0a0-47ac-8889-0bda35b1e552", APIVersion:"apps/v1", ResourceVersion:"312", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-k2f47
	I1005 20:37:38.553758       1 shared_informer.go:204] Caches are synced for expand 
	I1005 20:37:38.559594       1 shared_informer.go:204] Caches are synced for resource quota 
	I1005 20:37:38.560551       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"09eba7cb-b0a0-47ac-8889-0bda35b1e552", APIVersion:"apps/v1", ResourceVersion:"312", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-wmjhd
	I1005 20:37:38.588856       1 shared_informer.go:204] Caches are synced for disruption 
	I1005 20:37:38.588883       1 disruption.go:341] Sending events to api server.
	I1005 20:37:38.605013       1 shared_informer.go:204] Caches are synced for persistent volume 
	I1005 20:37:38.646306       1 shared_informer.go:204] Caches are synced for garbage collector 
	I1005 20:37:38.681988       1 shared_informer.go:204] Caches are synced for service account 
	I1005 20:37:38.683218       1 shared_informer.go:204] Caches are synced for namespace 
	I1005 20:37:38.686440       1 shared_informer.go:204] Caches are synced for garbage collector 
	I1005 20:37:38.686460       1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1005 20:37:38.941357       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"59e37402-092c-492a-8a24-0e86b565f6d7", APIVersion:"apps/v1", ResourceVersion:"341", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-5644d7b6d9 to 1
	I1005 20:37:38.999527       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"09eba7cb-b0a0-47ac-8889-0bda35b1e552", APIVersion:"apps/v1", ResourceVersion:"342", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-5644d7b6d9-wmjhd
	I1005 20:40:53.441120       1 node_lifecycle_controller.go:1058] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I1005 20:52:43.475150       1 node_lifecycle_controller.go:1085] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [cef84f5b51c4] <==
	* W1005 20:37:40.040149       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1005 20:37:40.052818       1 node.go:135] Successfully retrieved node IP: 192.168.85.2
	I1005 20:37:40.052863       1 server_others.go:149] Using iptables Proxier.
	I1005 20:37:40.053425       1 server.go:529] Version: v1.16.0
	I1005 20:37:40.053948       1 config.go:131] Starting endpoints config controller
	I1005 20:37:40.053980       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1005 20:37:40.054066       1 config.go:313] Starting service config controller
	I1005 20:37:40.054081       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1005 20:37:40.157367       1 shared_informer.go:204] Caches are synced for service config 
	I1005 20:37:40.224240       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [a576da8318f8] <==
	* I1005 20:37:18.743716       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1005 20:37:18.744158       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1005 20:37:18.842482       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1005 20:37:18.843126       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1005 20:37:18.843177       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1005 20:37:18.843316       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1005 20:37:18.843330       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1005 20:37:18.843386       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1005 20:37:18.843440       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1005 20:37:18.843521       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1005 20:37:18.843846       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1005 20:37:18.844748       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1005 20:37:18.927555       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1005 20:37:19.843756       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1005 20:37:19.844669       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1005 20:37:19.845775       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1005 20:37:19.846714       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1005 20:37:19.850359       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1005 20:37:19.919069       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1005 20:37:19.919980       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1005 20:37:19.927483       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1005 20:37:19.928788       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1005 20:37:19.929881       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1005 20:37:19.931691       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1005 20:46:29.532005       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* Oct 05 20:51:21 old-k8s-version-330869 kubelet[2004]: I1005 20:51:21.468505    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 13m41.650972067s ago; threshold is 3m0s
	Oct 05 20:51:26 old-k8s-version-330869 kubelet[2004]: I1005 20:51:26.468759    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 13m46.651231708s ago; threshold is 3m0s
	Oct 05 20:51:31 old-k8s-version-330869 kubelet[2004]: I1005 20:51:31.468983    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 13m51.651446046s ago; threshold is 3m0s
	Oct 05 20:51:36 old-k8s-version-330869 kubelet[2004]: I1005 20:51:36.469271    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 13m56.651735191s ago; threshold is 3m0s
	Oct 05 20:51:41 old-k8s-version-330869 kubelet[2004]: I1005 20:51:41.469526    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 14m1.651995326s ago; threshold is 3m0s
	Oct 05 20:51:46 old-k8s-version-330869 kubelet[2004]: I1005 20:51:46.469737    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 14m6.652207391s ago; threshold is 3m0s
	Oct 05 20:51:51 old-k8s-version-330869 kubelet[2004]: I1005 20:51:51.469958    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 14m11.652428101s ago; threshold is 3m0s
	Oct 05 20:51:56 old-k8s-version-330869 kubelet[2004]: I1005 20:51:56.470209    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 14m16.652673957s ago; threshold is 3m0s
	Oct 05 20:52:01 old-k8s-version-330869 kubelet[2004]: I1005 20:52:01.470458    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 14m21.652921421s ago; threshold is 3m0s
	Oct 05 20:52:06 old-k8s-version-330869 kubelet[2004]: I1005 20:52:06.470666    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 14m26.65313406s ago; threshold is 3m0s
	Oct 05 20:52:11 old-k8s-version-330869 kubelet[2004]: I1005 20:52:11.470891    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 14m31.653359263s ago; threshold is 3m0s
	Oct 05 20:52:16 old-k8s-version-330869 kubelet[2004]: I1005 20:52:16.471119    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 14m36.653584922s ago; threshold is 3m0s
	Oct 05 20:52:21 old-k8s-version-330869 kubelet[2004]: I1005 20:52:21.471359    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 14m41.653827014s ago; threshold is 3m0s
	Oct 05 20:52:26 old-k8s-version-330869 kubelet[2004]: I1005 20:52:26.471596    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 14m46.654063899s ago; threshold is 3m0s
	Oct 05 20:52:31 old-k8s-version-330869 kubelet[2004]: I1005 20:52:31.471791    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 14m51.654265006s ago; threshold is 3m0s
	Oct 05 20:52:36 old-k8s-version-330869 kubelet[2004]: I1005 20:52:36.472027    2004 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 14m56.654488765s ago; threshold is 3m0s
	Oct 05 20:52:39 old-k8s-version-330869 kubelet[2004]: E1005 20:52:39.019435    2004 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "coredns-5644d7b6d9-wmjhd": operation timeout: context deadline exceeded
	Oct 05 20:52:39 old-k8s-version-330869 kubelet[2004]: E1005 20:52:39.019533    2004 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-5644d7b6d9-wmjhd_kube-system(9cbed5dd-4684-4f3c-93d3-75465aeebcdc)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "coredns-5644d7b6d9-wmjhd": operation timeout: context deadline exceeded
	Oct 05 20:52:39 old-k8s-version-330869 kubelet[2004]: E1005 20:52:39.019547    2004 kuberuntime_manager.go:710] createPodSandbox for pod "coredns-5644d7b6d9-wmjhd_kube-system(9cbed5dd-4684-4f3c-93d3-75465aeebcdc)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "coredns-5644d7b6d9-wmjhd": operation timeout: context deadline exceeded
	Oct 05 20:52:39 old-k8s-version-330869 kubelet[2004]: E1005 20:52:39.019658    2004 pod_workers.go:191] Error syncing pod 9cbed5dd-4684-4f3c-93d3-75465aeebcdc ("coredns-5644d7b6d9-wmjhd_kube-system(9cbed5dd-4684-4f3c-93d3-75465aeebcdc)"), skipping: failed to "CreatePodSandbox" for "coredns-5644d7b6d9-wmjhd_kube-system(9cbed5dd-4684-4f3c-93d3-75465aeebcdc)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-5644d7b6d9-wmjhd_kube-system(9cbed5dd-4684-4f3c-93d3-75465aeebcdc)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"coredns-5644d7b6d9-wmjhd\": operation timeout: context deadline exceeded"
	Oct 05 20:52:40 old-k8s-version-330869 kubelet[2004]: E1005 20:52:40.128896    2004 kuberuntime_manager.go:920] PodSandboxStatus of sandbox "ce73ce4ed74195a85e09711fb49bcd0f0491f2710055da732259ae9b224a6ed6" for pod "coredns-5644d7b6d9-wmjhd_kube-system(9cbed5dd-4684-4f3c-93d3-75465aeebcdc)" error: rpc error: code = DeadlineExceeded desc = context deadline exceeded
	Oct 05 20:52:42 old-k8s-version-330869 kubelet[2004]: I1005 20:52:42.992874    2004 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-r58hx" (UniqueName: "kubernetes.io/secret/f8bda2b8-a4ff-4a22-bcdd-86323959b312-default-token-r58hx") pod "busybox" (UID: "f8bda2b8-a4ff-4a22-bcdd-86323959b312")
	Oct 05 20:52:43 old-k8s-version-330869 kubelet[2004]: W1005 20:52:43.430842    2004 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/busybox through plugin: invalid network status for
	Oct 05 20:53:12 old-k8s-version-330869 kubelet[2004]: E1005 20:53:12.166992    2004 remote_runtime.go:128] StopPodSandbox "ce73ce4ed74195a85e09711fb49bcd0f0491f2710055da732259ae9b224a6ed6" from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
	Oct 05 20:53:12 old-k8s-version-330869 kubelet[2004]: E1005 20:53:12.167043    2004 kuberuntime_gc.go:170] Failed to stop sandbox "ce73ce4ed74195a85e09711fb49bcd0f0491f2710055da732259ae9b224a6ed6" before removing: rpc error: code = DeadlineExceeded desc = context deadline exceeded
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-330869 -n old-k8s-version-330869
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-330869 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox coredns-5644d7b6d9-k2f47 storage-provisioner
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-330869 describe pod busybox coredns-5644d7b6d9-k2f47 storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-330869 describe pod busybox coredns-5644d7b6d9-k2f47 storage-provisioner: exit status 1 (66.37076ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             old-k8s-version-330869/192.168.85.2
	Start Time:       Thu, 05 Oct 2023 20:52:42 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from default-token-r58hx (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  default-token-r58hx:
	    Type:        Secret (a volume populated by a Secret)
	    SecretName:  default-token-r58hx
	    Optional:    false
	QoS Class:       BestEffort
	Node-Selectors:  <none>
	Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                   From               Message
	  ----     ------            ----                  ----               -------
	  Warning  FailedScheduling  8m3s                  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
	  Warning  FailedScheduling  6m44s (x1 over 8m3s)  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
	  Normal   Scheduled         110s                  default-scheduler  Successfully assigned default/busybox to old-k8s-version-330869
	  Normal   Pulling           110s                  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal   Pulled            109s                  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal   Created           109s                  kubelet            Created container busybox
	  Normal   Started           109s                  kubelet            Started container busybox

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-5644d7b6d9-k2f47" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-330869 describe pod busybox coredns-5644d7b6d9-k2f47 storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (483.86s)

                                                
                                    

Test pass (300/322)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 4.59
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.06
10 TestDownloadOnly/v1.28.2/json-events 5.26
11 TestDownloadOnly/v1.28.2/preload-exists 0
15 TestDownloadOnly/v1.28.2/LogsDuration 0.06
16 TestDownloadOnly/DeleteAll 0.2
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.13
18 TestDownloadOnlyKic 1.21
19 TestBinaryMirror 0.72
20 TestOffline 96.74
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
25 TestAddons/Setup 105.57
27 TestAddons/parallel/Registry 43.94
28 TestAddons/parallel/Ingress 50.13
29 TestAddons/parallel/InspektorGadget 48.19
30 TestAddons/parallel/MetricsServer 5.86
31 TestAddons/parallel/HelmTiller 9.93
33 TestAddons/parallel/CSI 55.93
34 TestAddons/parallel/Headlamp 39.61
35 TestAddons/parallel/CloudSpanner 5.44
36 TestAddons/parallel/LocalPath 9.74
39 TestAddons/serial/GCPAuth/Namespaces 0.14
40 TestAddons/StoppedEnableDisable 11.08
41 TestCertOptions 28.86
42 TestCertExpiration 233.48
43 TestDockerFlags 27.21
44 TestForceSystemdFlag 29.35
45 TestForceSystemdEnv 29.43
47 TestKVMDriverInstallOrUpdate 1.39
51 TestErrorSpam/setup 25.18
52 TestErrorSpam/start 0.6
53 TestErrorSpam/status 0.87
54 TestErrorSpam/pause 1.15
55 TestErrorSpam/unpause 1.2
56 TestErrorSpam/stop 10.88
59 TestFunctional/serial/CopySyncFile 0
60 TestFunctional/serial/StartWithProxy 40.19
61 TestFunctional/serial/AuditLog 0
62 TestFunctional/serial/SoftStart 36
63 TestFunctional/serial/KubeContext 0.05
64 TestFunctional/serial/KubectlGetPods 0.06
67 TestFunctional/serial/CacheCmd/cache/add_remote 2.24
68 TestFunctional/serial/CacheCmd/cache/add_local 0.97
69 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
70 TestFunctional/serial/CacheCmd/cache/list 0.04
71 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
72 TestFunctional/serial/CacheCmd/cache/cache_reload 1.28
73 TestFunctional/serial/CacheCmd/cache/delete 0.09
74 TestFunctional/serial/MinikubeKubectlCmd 0.11
75 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
76 TestFunctional/serial/ExtraConfig 36.35
77 TestFunctional/serial/ComponentHealth 0.07
78 TestFunctional/serial/LogsCmd 1.07
79 TestFunctional/serial/LogsFileCmd 1.05
80 TestFunctional/serial/InvalidService 4.17
82 TestFunctional/parallel/ConfigCmd 0.36
83 TestFunctional/parallel/DashboardCmd 28.65
84 TestFunctional/parallel/DryRun 0.4
85 TestFunctional/parallel/InternationalLanguage 0.19
86 TestFunctional/parallel/StatusCmd 1
90 TestFunctional/parallel/ServiceCmdConnect 9.64
91 TestFunctional/parallel/AddonsCmd 0.12
92 TestFunctional/parallel/PersistentVolumeClaim 39.38
94 TestFunctional/parallel/SSHCmd 0.64
95 TestFunctional/parallel/CpCmd 1.4
96 TestFunctional/parallel/MySQL 26.11
97 TestFunctional/parallel/FileSync 0.28
98 TestFunctional/parallel/CertSync 1.67
102 TestFunctional/parallel/NodeLabels 0.06
104 TestFunctional/parallel/NonActiveRuntimeDisabled 0.29
106 TestFunctional/parallel/License 0.16
107 TestFunctional/parallel/ServiceCmd/DeployApp 10.22
109 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.57
110 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
112 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.42
113 TestFunctional/parallel/ServiceCmd/List 0.49
114 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
115 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
119 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
120 TestFunctional/parallel/ServiceCmd/JSONOutput 0.5
121 TestFunctional/parallel/DockerEnv/bash 1
122 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
123 TestFunctional/parallel/ServiceCmd/Format 0.38
124 TestFunctional/parallel/ServiceCmd/URL 0.38
125 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
126 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
127 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
128 TestFunctional/parallel/Version/short 0.06
129 TestFunctional/parallel/Version/components 0.51
130 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
131 TestFunctional/parallel/MountCmd/any-port 18.31
132 TestFunctional/parallel/ProfileCmd/profile_list 0.33
133 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
134 TestFunctional/parallel/MountCmd/specific-port 1.95
135 TestFunctional/parallel/MountCmd/VerifyCleanup 2.02
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.2
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
140 TestFunctional/parallel/ImageCommands/ImageBuild 2.08
141 TestFunctional/parallel/ImageCommands/Setup 1.01
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.22
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.61
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.92
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.74
146 TestFunctional/parallel/ImageCommands/ImageRemove 0.43
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.01
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.94
149 TestFunctional/delete_addon-resizer_images 0.07
150 TestFunctional/delete_my-image_image 0.02
151 TestFunctional/delete_minikube_cached_images 0.02
155 TestImageBuild/serial/Setup 21.99
156 TestImageBuild/serial/NormalBuild 1.15
157 TestImageBuild/serial/BuildWithBuildArg 0.82
158 TestImageBuild/serial/BuildWithDockerIgnore 0.6
159 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.57
162 TestIngressAddonLegacy/StartLegacyK8sCluster 60.75
164 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 9.9
165 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.51
166 TestIngressAddonLegacy/serial/ValidateIngressAddons 39.15
169 TestJSONOutput/start/Command 40.53
170 TestJSONOutput/start/Audit 0
172 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/pause/Command 0.52
176 TestJSONOutput/pause/Audit 0
178 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/unpause/Command 0.46
182 TestJSONOutput/unpause/Audit 0
184 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/stop/Command 10.93
188 TestJSONOutput/stop/Audit 0
190 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
192 TestErrorJSONOutput 0.2
194 TestKicCustomNetwork/create_custom_network 27.28
195 TestKicCustomNetwork/use_default_bridge_network 24.23
196 TestKicExistingNetwork 24.42
197 TestKicCustomSubnet 27.1
198 TestKicStaticIP 26.9
199 TestMainNoArgs 0.04
200 TestMinikubeProfile 57.2
203 TestMountStart/serial/StartWithMountFirst 8.83
204 TestMountStart/serial/VerifyMountFirst 0.24
205 TestMountStart/serial/StartWithMountSecond 6.06
206 TestMountStart/serial/VerifyMountSecond 0.24
207 TestMountStart/serial/DeleteFirst 1.47
208 TestMountStart/serial/VerifyMountPostDelete 0.25
209 TestMountStart/serial/Stop 1.19
210 TestMountStart/serial/RestartStopped 7.23
211 TestMountStart/serial/VerifyMountPostStop 0.25
214 TestMultiNode/serial/FreshStart2Nodes 69.18
215 TestMultiNode/serial/DeployApp2Nodes 3.84
216 TestMultiNode/serial/PingHostFrom2Pods 0.8
217 TestMultiNode/serial/AddNode 18.13
218 TestMultiNode/serial/ProfileList 0.28
219 TestMultiNode/serial/CopyFile 9.14
220 TestMultiNode/serial/StopNode 2.14
221 TestMultiNode/serial/StartAfterStop 12.1
222 TestMultiNode/serial/RestartKeepsNodes 111.94
223 TestMultiNode/serial/DeleteNode 4.74
224 TestMultiNode/serial/StopMultiNode 21.57
225 TestMultiNode/serial/RestartMultiNode 84.71
226 TestMultiNode/serial/ValidateNameConflict 27.19
231 TestPreload 114.55
233 TestScheduledStopUnix 98.72
234 TestSkaffold 99.08
236 TestInsufficientStorage 13.28
237 TestRunningBinaryUpgrade 85.34
239 TestKubernetesUpgrade 188.72
240 TestMissingContainerUpgrade 120.73
242 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
243 TestStoppedBinaryUpgrade/Setup 0.5
244 TestNoKubernetes/serial/StartWithK8s 42.15
245 TestStoppedBinaryUpgrade/Upgrade 68.32
246 TestNoKubernetes/serial/StartWithStopK8s 16.89
247 TestNoKubernetes/serial/Start 8.36
248 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
249 TestNoKubernetes/serial/ProfileList 1.44
250 TestStoppedBinaryUpgrade/MinikubeLogs 1.44
251 TestNoKubernetes/serial/Stop 1.22
252 TestNoKubernetes/serial/StartNoArgs 8.3
253 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.34
273 TestPause/serial/Start 50.27
274 TestPause/serial/SecondStartNoReconfiguration 37.15
275 TestPause/serial/Pause 0.54
276 TestPause/serial/VerifyStatus 0.29
277 TestPause/serial/Unpause 0.49
278 TestPause/serial/PauseAgain 0.72
279 TestPause/serial/DeletePaused 2.2
280 TestPause/serial/VerifyDeletedResources 15.58
281 TestNetworkPlugins/group/auto/Start 52.64
282 TestNetworkPlugins/group/kindnet/Start 42.07
283 TestNetworkPlugins/group/auto/KubeletFlags 0.27
284 TestNetworkPlugins/group/auto/NetCatPod 10.29
285 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
286 TestNetworkPlugins/group/kindnet/KubeletFlags 0.32
287 TestNetworkPlugins/group/auto/DNS 0.21
288 TestNetworkPlugins/group/auto/Localhost 0.16
289 TestNetworkPlugins/group/kindnet/NetCatPod 11.34
290 TestNetworkPlugins/group/auto/HairPin 0.23
291 TestNetworkPlugins/group/calico/Start 72.65
292 TestNetworkPlugins/group/kindnet/DNS 0.28
293 TestNetworkPlugins/group/kindnet/Localhost 0.14
294 TestNetworkPlugins/group/kindnet/HairPin 0.15
295 TestNetworkPlugins/group/custom-flannel/Start 58.64
296 TestNetworkPlugins/group/false/Start 73.55
297 TestNetworkPlugins/group/enable-default-cni/Start 39.04
298 TestNetworkPlugins/group/calico/ControllerPod 5.02
299 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
300 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.27
301 TestNetworkPlugins/group/calico/KubeletFlags 0.27
302 TestNetworkPlugins/group/calico/NetCatPod 9.34
303 TestNetworkPlugins/group/custom-flannel/DNS 0.18
304 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
305 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
306 TestNetworkPlugins/group/calico/DNS 0.2
307 TestNetworkPlugins/group/calico/Localhost 0.15
308 TestNetworkPlugins/group/calico/HairPin 0.16
309 TestNetworkPlugins/group/false/KubeletFlags 0.35
310 TestNetworkPlugins/group/false/NetCatPod 11.38
311 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
312 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.35
313 TestNetworkPlugins/group/flannel/Start 41.63
314 TestNetworkPlugins/group/bridge/Start 81.34
315 TestNetworkPlugins/group/false/DNS 0.24
316 TestNetworkPlugins/group/enable-default-cni/DNS 33.21
317 TestNetworkPlugins/group/false/Localhost 0.16
318 TestNetworkPlugins/group/false/HairPin 0.15
319 TestNetworkPlugins/group/kubenet/Start 80.46
320 TestNetworkPlugins/group/flannel/ControllerPod 8.03
321 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
322 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
323 TestNetworkPlugins/group/flannel/KubeletFlags 0.39
324 TestNetworkPlugins/group/flannel/NetCatPod 10.34
325 TestNetworkPlugins/group/flannel/DNS 0.2
326 TestNetworkPlugins/group/flannel/Localhost 0.23
327 TestNetworkPlugins/group/flannel/HairPin 0.19
331 TestStartStop/group/no-preload/serial/FirstStart 87.82
332 TestNetworkPlugins/group/bridge/KubeletFlags 0.35
333 TestNetworkPlugins/group/bridge/NetCatPod 12.31
334 TestNetworkPlugins/group/bridge/DNS 0.17
335 TestNetworkPlugins/group/bridge/Localhost 0.15
336 TestNetworkPlugins/group/bridge/HairPin 0.15
337 TestNetworkPlugins/group/kubenet/KubeletFlags 0.34
338 TestNetworkPlugins/group/kubenet/NetCatPod 11.41
340 TestStartStop/group/embed-certs/serial/FirstStart 41.21
341 TestNetworkPlugins/group/kubenet/DNS 0.17
342 TestNetworkPlugins/group/kubenet/Localhost 0.15
343 TestNetworkPlugins/group/kubenet/HairPin 0.16
345 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 43.65
346 TestStartStop/group/embed-certs/serial/DeployApp 7.38
347 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.93
348 TestStartStop/group/embed-certs/serial/Stop 10.78
349 TestStartStop/group/no-preload/serial/DeployApp 8.37
350 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.89
351 TestStartStop/group/no-preload/serial/Stop 10.71
352 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.3
353 TestStartStop/group/embed-certs/serial/SecondStart 312.54
354 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.32
355 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
356 TestStartStop/group/no-preload/serial/SecondStart 338.28
357 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.97
358 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.95
359 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.31
360 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 314.01
361 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 12.02
362 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
363 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.32
364 TestStartStop/group/embed-certs/serial/Pause 2.52
366 TestStartStop/group/newest-cni/serial/FirstStart 40.9
367 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 12.02
368 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 11.03
369 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
370 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.34
371 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.6
372 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
373 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.33
374 TestStartStop/group/no-preload/serial/Pause 2.58
375 TestStartStop/group/newest-cni/serial/DeployApp 0
376 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.87
377 TestStartStop/group/newest-cni/serial/Stop 10.82
378 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
379 TestStartStop/group/newest-cni/serial/SecondStart 26.92
380 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
381 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
382 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.31
383 TestStartStop/group/newest-cni/serial/Pause 2.53
385 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.72
386 TestStartStop/group/old-k8s-version/serial/Stop 11.87
387 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
388 TestStartStop/group/old-k8s-version/serial/SecondStart 419.62
389 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
390 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
391 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
392 TestStartStop/group/old-k8s-version/serial/Pause 2.32
x
+
TestDownloadOnly/v1.16.0/json-events (4.59s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-228269 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-228269 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.586689046s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (4.59s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-228269
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-228269: exit status 85 (60.594045ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-228269 | jenkins | v1.31.2 | 05 Oct 23 20:02 UTC |          |
	|         | -p download-only-228269        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/05 20:02:53
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1005 20:02:53.486431  497938 out.go:296] Setting OutFile to fd 1 ...
	I1005 20:02:53.486716  497938 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:02:53.486726  497938 out.go:309] Setting ErrFile to fd 2...
	I1005 20:02:53.486731  497938 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:02:53.486956  497938 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-491115/.minikube/bin
	W1005 20:02:53.487105  497938 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17363-491115/.minikube/config/config.json: open /home/jenkins/minikube-integration/17363-491115/.minikube/config/config.json: no such file or directory
	I1005 20:02:53.487788  497938 out.go:303] Setting JSON to true
	I1005 20:02:53.488894  497938 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6322,"bootTime":1696529852,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1005 20:02:53.488962  497938 start.go:138] virtualization: kvm guest
	I1005 20:02:53.492642  497938 out.go:97] [download-only-228269] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1005 20:02:53.494466  497938 out.go:169] MINIKUBE_LOCATION=17363
	I1005 20:02:53.492827  497938 notify.go:220] Checking for updates...
	W1005 20:02:53.492840  497938 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17363-491115/.minikube/cache/preloaded-tarball: no such file or directory
	I1005 20:02:53.497412  497938 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 20:02:53.499140  497938 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17363-491115/kubeconfig
	I1005 20:02:53.500721  497938 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-491115/.minikube
	I1005 20:02:53.502217  497938 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1005 20:02:53.505018  497938 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1005 20:02:53.505343  497938 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 20:02:53.528089  497938 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 20:02:53.528202  497938 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 20:02:53.580039  497938 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2023-10-05 20:02:53.571467576 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1005 20:02:53.580198  497938 docker.go:294] overlay module found
	I1005 20:02:53.582250  497938 out.go:97] Using the docker driver based on user configuration
	I1005 20:02:53.582279  497938 start.go:298] selected driver: docker
	I1005 20:02:53.582285  497938 start.go:902] validating driver "docker" against <nil>
	I1005 20:02:53.582381  497938 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 20:02:53.635151  497938 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2023-10-05 20:02:53.627046914 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1005 20:02:53.635320  497938 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1005 20:02:53.635778  497938 start_flags.go:384] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1005 20:02:53.635924  497938 start_flags.go:905] Wait components to verify : map[apiserver:true system_pods:true]
	I1005 20:02:53.638181  497938 out.go:169] Using Docker driver with root privileges
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-228269"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.06s)
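The exit status 85 above is the expected outcome, not a regression: a --download-only profile only populates caches and never creates a node, so "minikube logs" has nothing to read and reports that the control plane node does not exist. A rough sketch of that assertion (hedged; this is not the test's code, it simply re-runs the same binary and checks the exit code):
	package main
	
	import (
		"errors"
		"fmt"
		"os/exec"
	)
	
	func main() {
		// "minikube logs" against a download-only profile should fail with exit code 85,
		// because no control-plane node was ever created for it.
		cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-228269")
		err := cmd.Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 85 {
			fmt.Println("got the expected exit status 85")
			return
		}
		fmt.Println("unexpected result:", err)
	}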

                                                
                                    
x
+
TestDownloadOnly/v1.28.2/json-events (5.26s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-228269 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-228269 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=docker  --container-runtime=docker: (5.261539626s)
--- PASS: TestDownloadOnly/v1.28.2/json-events (5.26s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/preload-exists
--- PASS: TestDownloadOnly/v1.28.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.2/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-228269
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-228269: exit status 85 (59.925205ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-228269 | jenkins | v1.31.2 | 05 Oct 23 20:02 UTC |          |
	|         | -p download-only-228269        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-228269 | jenkins | v1.31.2 | 05 Oct 23 20:02 UTC |          |
	|         | -p download-only-228269        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/05 20:02:58
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1005 20:02:58.137013  498083 out.go:296] Setting OutFile to fd 1 ...
	I1005 20:02:58.137139  498083 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:02:58.137148  498083 out.go:309] Setting ErrFile to fd 2...
	I1005 20:02:58.137152  498083 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:02:58.137362  498083 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-491115/.minikube/bin
	W1005 20:02:58.137485  498083 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17363-491115/.minikube/config/config.json: open /home/jenkins/minikube-integration/17363-491115/.minikube/config/config.json: no such file or directory
	I1005 20:02:58.137935  498083 out.go:303] Setting JSON to true
	I1005 20:02:58.138879  498083 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6326,"bootTime":1696529852,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1005 20:02:58.138949  498083 start.go:138] virtualization: kvm guest
	I1005 20:02:58.141463  498083 out.go:97] [download-only-228269] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1005 20:02:58.143184  498083 out.go:169] MINIKUBE_LOCATION=17363
	I1005 20:02:58.141674  498083 notify.go:220] Checking for updates...
	I1005 20:02:58.144910  498083 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 20:02:58.146506  498083 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17363-491115/kubeconfig
	I1005 20:02:58.148223  498083 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-491115/.minikube
	I1005 20:02:58.149829  498083 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1005 20:02:58.153000  498083 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1005 20:02:58.153503  498083 config.go:182] Loaded profile config "download-only-228269": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1005 20:02:58.153599  498083 start.go:810] api.Load failed for download-only-228269: filestore "download-only-228269": Docker machine "download-only-228269" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1005 20:02:58.153689  498083 driver.go:378] Setting default libvirt URI to qemu:///system
	W1005 20:02:58.153717  498083 start.go:810] api.Load failed for download-only-228269: filestore "download-only-228269": Docker machine "download-only-228269" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1005 20:02:58.176268  498083 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 20:02:58.176358  498083 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 20:02:58.228378  498083 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:42 SystemTime:2023-10-05 20:02:58.219992294 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1005 20:02:58.228473  498083 docker.go:294] overlay module found
	I1005 20:02:58.230504  498083 out.go:97] Using the docker driver based on existing profile
	I1005 20:02:58.230550  498083 start.go:298] selected driver: docker
	I1005 20:02:58.230556  498083 start.go:902] validating driver "docker" against &{Name:download-only-228269 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-228269 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 20:02:58.230716  498083 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 20:02:58.282192  498083 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:42 SystemTime:2023-10-05 20:02:58.273940337 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1005 20:02:58.282921  498083 cni.go:84] Creating CNI manager for ""
	I1005 20:02:58.282951  498083 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1005 20:02:58.282974  498083 start_flags.go:321] config:
	{Name:download-only-228269 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:download-only-228269 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 20:02:58.285079  498083 out.go:97] Starting control plane node download-only-228269 in cluster download-only-228269
	I1005 20:02:58.285116  498083 cache.go:122] Beginning downloading kic base image for docker with docker
	I1005 20:02:58.286869  498083 out.go:97] Pulling base image ...
	I1005 20:02:58.286900  498083 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1005 20:02:58.286956  498083 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1005 20:02:58.302769  498083 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae to local cache
	I1005 20:02:58.302927  498083 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local cache directory
	I1005 20:02:58.302945  498083 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local cache directory, skipping pull
	I1005 20:02:58.302950  498083 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in cache, skipping pull
	I1005 20:02:58.302967  498083 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae as a tarball
	I1005 20:02:58.321817  498083 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I1005 20:02:58.321865  498083 cache.go:57] Caching tarball of preloaded images
	I1005 20:02:58.322063  498083 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1005 20:02:58.324479  498083 out.go:97] Downloading Kubernetes v1.28.2 preload ...
	I1005 20:02:58.324523  498083 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 ...
	I1005 20:02:58.360263  498083 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4?checksum=md5:30a5cb95ef165c1e9196502a3ab2be2b -> /home/jenkins/minikube-integration/17363-491115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I1005 20:03:01.874187  498083 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 ...
	I1005 20:03:01.874316  498083 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17363-491115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 ...
	I1005 20:03:02.732099  498083 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1005 20:03:02.732268  498083 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/download-only-228269/config.json ...
	I1005 20:03:02.732507  498083 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1005 20:03:02.732746  498083 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17363-491115/.minikube/cache/linux/amd64/v1.28.2/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-228269"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.2/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.20s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-228269
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnlyKic (1.21s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-340082 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-340082" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-340082
--- PASS: TestDownloadOnlyKic (1.21s)

                                                
                                    
x
+
TestBinaryMirror (0.72s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-899334 --alsologtostderr --binary-mirror http://127.0.0.1:39901 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-899334" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-899334
--- PASS: TestBinaryMirror (0.72s)
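TestBinaryMirror points minikube at a local HTTP endpoint (--binary-mirror http://127.0.0.1:39901) instead of the default release host when fetching kubectl/kubelet/kubeadm. As a hedged illustration only: any static file server exposing the expected release paths can act as such a mirror; the directory name and its internal layout below are assumptions, not what this test actually serves.
	package main
	
	import (
		"log"
		"net/http"
	)
	
	func main() {
		// Serve a local directory as a stand-in binary mirror on the port the test used.
		// "./mirror" and its layout are placeholders for illustration.
		log.Fatal(http.ListenAndServe("127.0.0.1:39901", http.FileServer(http.Dir("./mirror"))))
	}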

                                                
                                    
x
+
TestOffline (96.74s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-896990 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-896990 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m34.414584485s)
helpers_test.go:175: Cleaning up "offline-docker-896990" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-896990
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-896990: (2.328961459s)
--- PASS: TestOffline (96.74s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:926: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-432328
addons_test.go:926: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-432328: exit status 85 (46.645663ms)

                                                
                                                
-- stdout --
	* Profile "addons-432328" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-432328"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:937: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-432328
addons_test.go:937: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-432328: exit status 85 (47.503059ms)

                                                
                                                
-- stdout --
	* Profile "addons-432328" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-432328"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (105.57s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-432328 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-432328 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (1m45.567785395s)
--- PASS: TestAddons/Setup (105.57s)

                                                
                                    
x
+
TestAddons/parallel/Registry (43.94s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 14.16093ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-nstct" [94fd85a7-42b4-450b-b55b-de17ee286864] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.013788805s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-l95rr" [6bf12404-9b36-4263-8963-b2cf1e029c5f] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.014287635s
addons_test.go:338: (dbg) Run:  kubectl --context addons-432328 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-432328 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Done: kubectl --context addons-432328 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (33.097353123s)
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p addons-432328 ip
2023/10/05 20:05:34 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-432328 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (43.94s)

                                                
                                    
x
+
TestAddons/parallel/Ingress (50.13s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-432328 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-432328 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:230: (dbg) Done: kubectl --context addons-432328 replace --force -f testdata/nginx-ingress-v1.yaml: (1.003397012s)
addons_test.go:243: (dbg) Run:  kubectl --context addons-432328 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [277f2eee-e61e-44c7-81e9-f1942b14d1f8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [277f2eee-e61e-44c7-81e9-f1942b14d1f8] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 39.010003396s
addons_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p addons-432328 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-432328 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p addons-432328 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p addons-432328 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-amd64 -p addons-432328 addons disable ingress-dns --alsologtostderr -v=1: (1.190844859s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p addons-432328 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-amd64 -p addons-432328 addons disable ingress --alsologtostderr -v=1: (7.608100105s)
--- PASS: TestAddons/parallel/Ingress (50.13s)
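
Note: the ingress checks boil down to serving the nginx test pod behind an Ingress, curling it through the node with a Host header, and resolving a test host via ingress-dns; a minimal sketch using the commands from the log (the command substitution for the node IP is just a convenience):

    kubectl --context addons-432328 replace --force -f testdata/nginx-ingress-v1.yaml
    kubectl --context addons-432328 replace --force -f testdata/nginx-pod-svc.yaml
    out/minikube-linux-amd64 -p addons-432328 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # ingress-dns: the test host should resolve against the node IP
    nslookup hello-john.test "$(out/minikube-linux-amd64 -p addons-432328 ip)"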

TestAddons/parallel/InspektorGadget (48.19s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:836: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-f527k" [d09e7e34-9ce7-4af7-a403-a59632093dc4] Running
addons_test.go:836: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.012611219s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-432328
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-432328: (43.17425216s)
--- PASS: TestAddons/parallel/InspektorGadget (48.19s)

TestAddons/parallel/MetricsServer (5.86s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 14.421069ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-454kw" [973acd21-daf0-4b5b-a70a-661b1da9b756] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.013270121s
addons_test.go:413: (dbg) Run:  kubectl --context addons-432328 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p addons-432328 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.86s)
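
Note: the metrics-server check amounts to "kubectl top returns pod metrics"; to repeat it by hand against the same profile:

    kubectl --context addons-432328 top pods -n kube-system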

TestAddons/parallel/HelmTiller (9.93s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:454: tiller-deploy stabilized in 13.296262ms
addons_test.go:456: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-dfh9l" [574707d5-b097-4bbd-8742-22c11c626d1f] Running
addons_test.go:456: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.013270131s
addons_test.go:471: (dbg) Run:  kubectl --context addons-432328 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:471: (dbg) Done: kubectl --context addons-432328 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.282774028s)
addons_test.go:488: (dbg) Run:  out/minikube-linux-amd64 -p addons-432328 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.93s)
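
Note: the tiller check simply talks to the in-cluster tiller with the matching helm v2 client image, exactly as logged above:

    kubectl --context addons-432328 run --rm helm-test --restart=Never \
      --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version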

TestAddons/parallel/CSI (55.93s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:559: csi-hostpath-driver pods stabilized in 32.449644ms
addons_test.go:562: (dbg) Run:  kubectl --context addons-432328 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-432328 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-432328 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-432328 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-432328 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-432328 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-432328 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-432328 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-432328 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-432328 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:572: (dbg) Run:  kubectl --context addons-432328 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a1e86573-aa54-4c44-8919-5c8b3e238a15] Pending
helpers_test.go:344: "task-pv-pod" [a1e86573-aa54-4c44-8919-5c8b3e238a15] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [a1e86573-aa54-4c44-8919-5c8b3e238a15] Running
addons_test.go:577: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.009872365s
addons_test.go:582: (dbg) Run:  kubectl --context addons-432328 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-432328 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-432328 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-432328 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-432328 delete pod task-pv-pod
addons_test.go:598: (dbg) Run:  kubectl --context addons-432328 delete pvc hpvc
addons_test.go:604: (dbg) Run:  kubectl --context addons-432328 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-432328 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-432328 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-432328 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-432328 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-432328 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-432328 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-432328 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-432328 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-432328 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-432328 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-432328 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-432328 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-432328 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-432328 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-432328 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-432328 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-432328 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-432328 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-432328 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-432328 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-432328 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:614: (dbg) Run:  kubectl --context addons-432328 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:619: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [bb62b302-a89d-4a84-90e2-80c51e42d0e1] Pending
helpers_test.go:344: "task-pv-pod-restore" [bb62b302-a89d-4a84-90e2-80c51e42d0e1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [bb62b302-a89d-4a84-90e2-80c51e42d0e1] Running
addons_test.go:619: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.009674626s
addons_test.go:624: (dbg) Run:  kubectl --context addons-432328 delete pod task-pv-pod-restore
addons_test.go:628: (dbg) Run:  kubectl --context addons-432328 delete pvc hpvc-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-432328 delete volumesnapshot new-snapshot-demo
addons_test.go:636: (dbg) Run:  out/minikube-linux-amd64 -p addons-432328 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:636: (dbg) Done: out/minikube-linux-amd64 -p addons-432328 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.521008024s)
addons_test.go:640: (dbg) Run:  out/minikube-linux-amd64 -p addons-432328 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (55.93s)
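
Note: the long runs of "get pvc ... -o jsonpath={.status.phase}" above are the harness polling a claim until it is provisioned; an equivalent manual wait loop (a sketch, assuming Bound is the phase being waited for) looks like:

    # poll the claim the same way the repeated kubectl calls above do
    until [ "$(kubectl --context addons-432328 get pvc hpvc -o jsonpath='{.status.phase}' -n default)" = "Bound" ]; do
      sleep 2
    done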

TestAddons/parallel/Headlamp (39.61s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:822: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-432328 --alsologtostderr -v=1
addons_test.go:822: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-432328 --alsologtostderr -v=1: (1.598819692s)
addons_test.go:827: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-58b88cff49-z4mhl" [2e3c8a2b-2977-4a8d-a85c-5849a12ff3b0] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-58b88cff49-z4mhl" [2e3c8a2b-2977-4a8d-a85c-5849a12ff3b0] Running
addons_test.go:827: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 38.012913843s
--- PASS: TestAddons/parallel/Headlamp (39.61s)

TestAddons/parallel/CloudSpanner (5.44s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:855: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7d49f968d9-bczbz" [650c145d-ca6a-4c67-830b-c1599eb5fc7b] Running
addons_test.go:855: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.009132488s
addons_test.go:858: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-432328
--- PASS: TestAddons/parallel/CloudSpanner (5.44s)

TestAddons/parallel/LocalPath (9.74s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:871: (dbg) Run:  kubectl --context addons-432328 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:877: (dbg) Run:  kubectl --context addons-432328 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:881: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-432328 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-432328 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-432328 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-432328 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-432328 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:884: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [4f8d2f69-60fe-43e8-ab6d-114fadd67ab1] Pending
helpers_test.go:344: "test-local-path" [4f8d2f69-60fe-43e8-ab6d-114fadd67ab1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [4f8d2f69-60fe-43e8-ab6d-114fadd67ab1] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [4f8d2f69-60fe-43e8-ab6d-114fadd67ab1] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:884: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.008949447s
addons_test.go:889: (dbg) Run:  kubectl --context addons-432328 get pvc test-pvc -o=json
addons_test.go:898: (dbg) Run:  out/minikube-linux-amd64 -p addons-432328 ssh "cat /opt/local-path-provisioner/pvc-d914f2bf-b02e-434e-b936-3eb3b8db2cea_default_test-pvc/file1"
addons_test.go:910: (dbg) Run:  kubectl --context addons-432328 delete pod test-local-path
addons_test.go:914: (dbg) Run:  kubectl --context addons-432328 delete pvc test-pvc
addons_test.go:918: (dbg) Run:  out/minikube-linux-amd64 -p addons-432328 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (9.74s)

TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:648: (dbg) Run:  kubectl --context addons-432328 create ns new-namespace
addons_test.go:662: (dbg) Run:  kubectl --context addons-432328 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

TestAddons/StoppedEnableDisable (11.08s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-432328
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-432328: (10.843936781s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-432328
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-432328
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-432328
--- PASS: TestAddons/StoppedEnableDisable (11.08s)

TestCertOptions (28.86s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-193402 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-193402 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (26.161990631s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-193402 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-193402 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-193402 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-193402" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-193402
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-193402: (2.079494964s)
--- PASS: TestCertOptions (28.86s)
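
Note: to confirm that the extra --apiserver-ips / --apiserver-names values from the start flags actually landed in the certificate's SANs, the openssl output above can be filtered (the grep pattern is illustrative):

    out/minikube-linux-amd64 -p cert-options-193402 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -E '192\.168\.15\.15|www\.google\.com'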

TestCertExpiration (233.48s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-191011 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
E1005 20:31:15.238487  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/functional-168323/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-191011 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (28.09155363s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-191011 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
E1005 20:34:51.412150  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/addons-432328/client.crt: no such file or directory
E1005 20:34:52.190867  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/functional-168323/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-191011 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (23.129305012s)
helpers_test.go:175: Cleaning up "cert-expiration-191011" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-191011
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-191011: (2.261673298s)
--- PASS: TestCertExpiration (233.48s)

TestDockerFlags (27.21s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-611028 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E1005 20:32:24.173623  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/ingress-addon-legacy-111068/client.crt: no such file or directory
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-611028 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (24.53380748s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-611028 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-611028 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-611028" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-611028
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-611028: (2.113656741s)
--- PASS: TestDockerFlags (27.21s)
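
Note: the two systemctl queries above are how the --docker-env and --docker-opt values are verified; run by hand they should show the FOO=BAR / BAZ=BAT environment and the debug / icc=true options on the daemon unit (the expected values are inferred from the start flags, not shown in the log):

    out/minikube-linux-amd64 -p docker-flags-611028 ssh "sudo systemctl show docker --property=Environment --no-pager"
    out/minikube-linux-amd64 -p docker-flags-611028 ssh "sudo systemctl show docker --property=ExecStart --no-pager"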

TestForceSystemdFlag (29.35s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-124153 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-124153 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (25.374802101s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-124153 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-124153" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-124153
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-124153: (3.50311691s)
--- PASS: TestForceSystemdFlag (29.35s)
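
Note: with --force-systemd the follow-up check is just asking Docker inside the node which cgroup driver it ended up with; it should report systemd if the flag took effect:

    out/minikube-linux-amd64 -p force-systemd-flag-124153 ssh "docker info --format {{.CgroupDriver}}"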

TestForceSystemdEnv (29.43s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-347131 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-347131 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (26.668781523s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-347131 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-347131" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-347131
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-347131: (2.361867673s)
--- PASS: TestForceSystemdEnv (29.43s)

TestKVMDriverInstallOrUpdate (1.39s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.39s)

TestErrorSpam/setup (25.18s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-839205 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-839205 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-839205 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-839205 --driver=docker  --container-runtime=docker: (25.179345538s)
--- PASS: TestErrorSpam/setup (25.18s)

TestErrorSpam/start (0.6s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839205 --log_dir /tmp/nospam-839205 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839205 --log_dir /tmp/nospam-839205 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839205 --log_dir /tmp/nospam-839205 start --dry-run
--- PASS: TestErrorSpam/start (0.60s)

TestErrorSpam/status (0.87s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839205 --log_dir /tmp/nospam-839205 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839205 --log_dir /tmp/nospam-839205 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839205 --log_dir /tmp/nospam-839205 status
--- PASS: TestErrorSpam/status (0.87s)

TestErrorSpam/pause (1.15s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839205 --log_dir /tmp/nospam-839205 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839205 --log_dir /tmp/nospam-839205 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839205 --log_dir /tmp/nospam-839205 pause
--- PASS: TestErrorSpam/pause (1.15s)

TestErrorSpam/unpause (1.2s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839205 --log_dir /tmp/nospam-839205 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839205 --log_dir /tmp/nospam-839205 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839205 --log_dir /tmp/nospam-839205 unpause
--- PASS: TestErrorSpam/unpause (1.20s)

TestErrorSpam/stop (10.88s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839205 --log_dir /tmp/nospam-839205 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-839205 --log_dir /tmp/nospam-839205 stop: (10.710547366s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839205 --log_dir /tmp/nospam-839205 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839205 --log_dir /tmp/nospam-839205 stop
--- PASS: TestErrorSpam/stop (10.88s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17363-491115/.minikube/files/etc/test/nested/copy/497926/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (40.19s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-168323 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-168323 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (40.187351136s)
--- PASS: TestFunctional/serial/StartWithProxy (40.19s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-168323 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-168323 --alsologtostderr -v=8: (35.99878593s)
functional_test.go:659: soft start took 35.999658416s for "functional-168323" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.00s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-168323 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.24s)

TestFunctional/serial/CacheCmd/cache/add_local (0.97s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-168323 /tmp/TestFunctionalserialCacheCmdcacheadd_local51923257/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 cache add minikube-local-cache-test:functional-168323
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 cache delete minikube-local-cache-test:functional-168323
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-168323
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.97s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-168323 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (281.204158ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.28s)
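
Note: the cache_reload sequence above can be replayed by hand: remove the image inside the node, confirm crictl no longer sees it, reload from minikube's cache, then confirm it is back (the trailing comments are annotations, not log output):

    out/minikube-linux-amd64 -p functional-168323 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-168323 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected to fail (exit 1)
    out/minikube-linux-amd64 -p functional-168323 cache reload
    out/minikube-linux-amd64 -p functional-168323 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again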

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 kubectl -- --context functional-168323 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-168323 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (36.35s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-168323 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-168323 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.354423467s)
functional_test.go:757: restart took 36.354551142s for "functional-168323" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.35s)
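
Note: the restart above exercises --extra-config, whose component.key=value form is visible in the logged command; reproduced verbatim for reference:

    out/minikube-linux-amd64 start -p functional-168323 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all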

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-168323 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.07s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-168323 logs: (1.065276301s)
--- PASS: TestFunctional/serial/LogsCmd (1.07s)

TestFunctional/serial/LogsFileCmd (1.05s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 logs --file /tmp/TestFunctionalserialLogsFileCmd2593236973/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-168323 logs --file /tmp/TestFunctionalserialLogsFileCmd2593236973/001/logs.txt: (1.04979127s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.05s)

TestFunctional/serial/InvalidService (4.17s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-168323 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-168323
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-168323: exit status 115 (336.312411ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30336 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-168323 delete -f testdata/invalidsvc.yaml
E1005 20:09:51.412266  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/addons-432328/client.crt: no such file or directory
E1005 20:09:51.418456  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/addons-432328/client.crt: no such file or directory
E1005 20:09:51.429154  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/addons-432328/client.crt: no such file or directory
E1005 20:09:51.449931  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/addons-432328/client.crt: no such file or directory
E1005 20:09:51.490242  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/addons-432328/client.crt: no such file or directory
E1005 20:09:51.570606  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/addons-432328/client.crt: no such file or directory
E1005 20:09:51.731015  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/addons-432328/client.crt: no such file or directory
--- PASS: TestFunctional/serial/InvalidService (4.17s)

TestFunctional/parallel/ConfigCmd (0.36s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 config get cpus
E1005 20:09:52.051697  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/addons-432328/client.crt: no such file or directory
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-168323 config get cpus: exit status 14 (63.159257ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-168323 config get cpus: exit status 14 (56.323983ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)
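
Note: the exit status 14 seen twice above is the expected "key could not be found" result of config get after an unset; the full round-trip from the log is (trailing comments are annotations):

    out/minikube-linux-amd64 -p functional-168323 config set cpus 2
    out/minikube-linux-amd64 -p functional-168323 config get cpus     # should print the value just set
    out/minikube-linux-amd64 -p functional-168323 config unset cpus
    out/minikube-linux-amd64 -p functional-168323 config get cpus     # exits 14: key not found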

TestFunctional/parallel/DashboardCmd (28.65s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-168323 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-168323 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 543663: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (28.65s)

TestFunctional/parallel/DryRun (0.4s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-168323 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-168323 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (158.105862ms)

-- stdout --
	* [functional-168323] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17363-491115/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-491115/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1005 20:10:05.367498  542836 out.go:296] Setting OutFile to fd 1 ...
	I1005 20:10:05.367765  542836 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:10:05.367775  542836 out.go:309] Setting ErrFile to fd 2...
	I1005 20:10:05.367780  542836 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:10:05.367973  542836 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-491115/.minikube/bin
	I1005 20:10:05.368545  542836 out.go:303] Setting JSON to false
	I1005 20:10:05.369953  542836 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6754,"bootTime":1696529852,"procs":586,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1005 20:10:05.370037  542836 start.go:138] virtualization: kvm guest
	I1005 20:10:05.372854  542836 out.go:177] * [functional-168323] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1005 20:10:05.374666  542836 out.go:177]   - MINIKUBE_LOCATION=17363
	I1005 20:10:05.374691  542836 notify.go:220] Checking for updates...
	I1005 20:10:05.376406  542836 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 20:10:05.378314  542836 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17363-491115/kubeconfig
	I1005 20:10:05.380198  542836 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-491115/.minikube
	I1005 20:10:05.381962  542836 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1005 20:10:05.383511  542836 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1005 20:10:05.385468  542836 config.go:182] Loaded profile config "functional-168323": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1005 20:10:05.386013  542836 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 20:10:05.410181  542836 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 20:10:05.410289  542836 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 20:10:05.469189  542836 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:47 SystemTime:2023-10-05 20:10:05.459372605 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1005 20:10:05.469355  542836 docker.go:294] overlay module found
	I1005 20:10:05.471637  542836 out.go:177] * Using the docker driver based on existing profile
	I1005 20:10:05.473380  542836 start.go:298] selected driver: docker
	I1005 20:10:05.473403  542836 start.go:902] validating driver "docker" against &{Name:functional-168323 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-168323 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 20:10:05.473544  542836 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1005 20:10:05.476306  542836 out.go:177] 
	W1005 20:10:05.478580  542836 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1005 20:10:05.480207  542836 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-168323 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.40s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-168323 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-168323 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (192.659321ms)

                                                
                                                
-- stdout --
	* [functional-168323] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17363-491115/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-491115/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1005 20:10:05.590977  542975 out.go:296] Setting OutFile to fd 1 ...
	I1005 20:10:05.591318  542975 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:10:05.591332  542975 out.go:309] Setting ErrFile to fd 2...
	I1005 20:10:05.591339  542975 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:10:05.591791  542975 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-491115/.minikube/bin
	I1005 20:10:05.592543  542975 out.go:303] Setting JSON to false
	I1005 20:10:05.594382  542975 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6754,"bootTime":1696529852,"procs":590,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1005 20:10:05.594474  542975 start.go:138] virtualization: kvm guest
	I1005 20:10:05.596862  542975 out.go:177] * [functional-168323] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	I1005 20:10:05.599162  542975 out.go:177]   - MINIKUBE_LOCATION=17363
	I1005 20:10:05.599107  542975 notify.go:220] Checking for updates...
	I1005 20:10:05.602420  542975 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 20:10:05.604433  542975 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17363-491115/kubeconfig
	I1005 20:10:05.606153  542975 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-491115/.minikube
	I1005 20:10:05.608568  542975 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1005 20:10:05.610112  542975 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1005 20:10:05.612395  542975 config.go:182] Loaded profile config "functional-168323": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1005 20:10:05.613125  542975 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 20:10:05.639758  542975 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 20:10:05.639858  542975 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 20:10:05.708979  542975 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:47 SystemTime:2023-10-05 20:10:05.699491116 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1005 20:10:05.709074  542975 docker.go:294] overlay module found
	I1005 20:10:05.711144  542975 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1005 20:10:05.712813  542975 start.go:298] selected driver: docker
	I1005 20:10:05.712834  542975 start.go:902] validating driver "docker" against &{Name:functional-168323 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-168323 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 20:10:05.712931  542975 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1005 20:10:05.715255  542975 out.go:177] 
	W1005 20:10:05.717090  542975 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1005 20:10:05.718892  542975 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)
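
The localized exit message above is the same RSRC_INSUFFICIENT_REQ_MEMORY validation exercised by DryRun, rendered in French presumably because the test runs the binary under a French locale. A rough sketch of that boundary, using illustrative values (2200MB is simply a value comfortably above the stated 1800MB minimum):

  # rejected: 250MB is below the usable minimum, so start exits with status 23 before doing any work
  out/minikube-linux-amd64 start -p functional-168323 --dry-run --memory 250MB --driver=docker
  # accepted by the same validation
  out/minikube-linux-amd64 start -p functional-168323 --dry-run --memory 2200MB --driver=docker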

                                                
                                    
TestFunctional/parallel/StatusCmd (1.00s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.00s)
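
For reference, the three invocations above exercise the default, Go-template, and JSON output modes of the status command; the same checks can be run by hand against this profile:

  out/minikube-linux-amd64 -p functional-168323 status
  out/minikube-linux-amd64 -p functional-168323 status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'
  out/minikube-linux-amd64 -p functional-168323 status -o json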

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (9.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-168323 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-168323 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-nnmv4" [07086e1a-4c0d-47af-a199-2b64b76072c4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
E1005 20:09:53.973032  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/addons-432328/client.crt: no such file or directory
helpers_test.go:344: "hello-node-connect-55497b8b78-nnmv4" [07086e1a-4c0d-47af-a199-2b64b76072c4] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.010285079s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:30490
functional_test.go:1674: http://192.168.49.2:30490: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-nnmv4

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30490
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.64s)
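
The response body above is the stock echoserver reply, so the endpoint printed by "service --url" can be verified directly; a minimal sketch against the same deployment (the NodePort is assigned per run):

  URL=$(out/minikube-linux-amd64 -p functional-168323 service hello-node-connect --url)
  curl -s "$URL"   # echoes the hostname, request headers and client address, as captured above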

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (39.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [f3a21fa4-e537-4353-8352-a5c3b91b4110] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.011630367s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-168323 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-168323 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-168323 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-168323 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b7c88283-f870-45db-810c-29be44fee70f] Pending
helpers_test.go:344: "sp-pod" [b7c88283-f870-45db-810c-29be44fee70f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1005 20:10:01.654649  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/addons-432328/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [b7c88283-f870-45db-810c-29be44fee70f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.050073231s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-168323 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-168323 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-168323 delete -f testdata/storage-provisioner/pod.yaml: (1.124806739s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-168323 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [81d9cc9a-f064-4b43-96ea-55d5e73791ba] Pending
helpers_test.go:344: "sp-pod" [81d9cc9a-f064-4b43-96ea-55d5e73791ba] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1005 20:10:11.895730  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/addons-432328/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [81d9cc9a-f064-4b43-96ea-55d5e73791ba] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.010532078s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-168323 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (39.38s)
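
The claim itself comes from testdata/storage-provisioner/pvc.yaml (not reproduced in this log); assuming the same context and claim name, binding by the default storage class can be confirmed with:

  kubectl --context functional-168323 get pvc myclaim -o jsonpath='{.status.phase}'              # expect: Bound
  kubectl --context functional-168323 get pvc myclaim -o jsonpath='{.spec.storageClassName}'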

                                                
                                    
TestFunctional/parallel/SSHCmd (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.64s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.40s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 ssh -n functional-168323 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 cp functional-168323:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3774535740/001/cp-test.txt
E1005 20:09:52.692002  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/addons-432328/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 ssh -n functional-168323 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.40s)

                                                
                                    
TestFunctional/parallel/MySQL (26.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-168323 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-sl2td" [c544ab20-6311-47b2-ae3a-767e2b0107bd] Pending
helpers_test.go:344: "mysql-859648c796-sl2td" [c544ab20-6311-47b2-ae3a-767e2b0107bd] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-sl2td" [c544ab20-6311-47b2-ae3a-767e2b0107bd] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.012062487s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-168323 exec mysql-859648c796-sl2td -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-168323 exec mysql-859648c796-sl2td -- mysql -ppassword -e "show databases;": exit status 1 (236.129987ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-168323 exec mysql-859648c796-sl2td -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-168323 exec mysql-859648c796-sl2td -- mysql -ppassword -e "show databases;": exit status 1 (216.974675ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-168323 exec mysql-859648c796-sl2td -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-168323 exec mysql-859648c796-sl2td -- mysql -ppassword -e "show databases;": exit status 1 (143.381823ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-168323 exec mysql-859648c796-sl2td -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.11s)
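
The "Access denied" and "Can't connect" failures above are transient: mysqld is still initializing inside the pod, and the test retries the query until it succeeds. An equivalent manual check, assuming the deployment name and password defined in testdata/mysql.yaml:

  kubectl --context functional-168323 exec deploy/mysql -- mysql -ppassword -e "show databases;"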

                                                
                                    
TestFunctional/parallel/FileSync (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/497926/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 ssh "sudo cat /etc/test/nested/copy/497926/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)
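
File sync works by copying anything staged under $MINIKUBE_HOME/files/<path> to the same <path> inside the node when the cluster starts; a rough sketch of how the file checked above could be staged (the test harness does this setup itself, outside this log):

  mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/497926"
  echo "Test file for checking file sync process" > "$MINIKUBE_HOME/files/etc/test/nested/copy/497926/hosts"
  out/minikube-linux-amd64 -p functional-168323 ssh "sudo cat /etc/test/nested/copy/497926/hosts"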

                                                
                                    
TestFunctional/parallel/CertSync (1.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/497926.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 ssh "sudo cat /etc/ssl/certs/497926.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/497926.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 ssh "sudo cat /usr/share/ca-certificates/497926.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/4979262.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 ssh "sudo cat /etc/ssl/certs/4979262.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/4979262.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 ssh "sudo cat /usr/share/ca-certificates/4979262.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.67s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-168323 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-168323 ssh "sudo systemctl is-active crio": exit status 1 (289.299279ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.29s)
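
The non-zero exit here is the expected outcome: systemctl is-active exits 0 only for an active unit and 3 for an inactive one, so "inactive" plus exit status 3 confirms the unused runtime is disabled while docker remains the active runtime:

  out/minikube-linux-amd64 -p functional-168323 ssh "sudo systemctl is-active crio"   # prints "inactive", exits 3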

                                                
                                    
TestFunctional/parallel/License (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.16s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (10.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-168323 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-168323 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-cjmrq" [23af84d5-3bc7-44c7-83bc-1947f33906f8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-cjmrq" [23af84d5-3bc7-44c7-83bc-1947f33906f8] Running
E1005 20:09:56.533709  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/addons-432328/client.crt: no such file or directory
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.018858277s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.22s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-168323 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-168323 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-168323 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-168323 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 539408: os: process already finished
helpers_test.go:508: unable to kill pid 539026: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.57s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-168323 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-168323 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [2c7776cd-140b-4858-802a-ed28d6b508dd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [2c7776cd-140b-4858-802a-ed28d6b508dd] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.064553739s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.42s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.49s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-168323 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.95.134 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
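
While "minikube tunnel" runs, LoadBalancer services receive a reachable ingress IP, which is what the check against 10.100.95.134 verifies; the same flow by hand (the IP is assigned per run, and tunnel may prompt for sudo to add routes):

  out/minikube-linux-amd64 -p functional-168323 tunnel &
  IP=$(kubectl --context functional-168323 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  curl -sI "http://$IP"   # nginx-svc answers once the tunnel is up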

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-168323 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.50s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 service list -o json
functional_test.go:1493: Took "501.704642ms" to run "out/minikube-linux-amd64 -p functional-168323 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.50s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (1.00s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-168323 docker-env) && out/minikube-linux-amd64 status -p functional-168323"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-168323 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.00s)
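
docker-env points the local docker CLI at the daemon inside the minikube node, which is why the second command's "docker images" lists the node's images rather than the host's; a sketch including the undo step:

  eval "$(out/minikube-linux-amd64 -p functional-168323 docker-env)"
  docker images    # images inside the functional-168323 node
  eval "$(out/minikube-linux-amd64 -p functional-168323 docker-env --unset)"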

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:30707
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:30707
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)
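
update-context rewrites the profile's kubeconfig entry (server IP and port) so kubectl keeps pointing at the right endpoint after an address change; a minimal manual check:

  out/minikube-linux-amd64 -p functional-168323 update-context
  kubectl config current-context   # expected to be functional-168323 for this run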

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 version -o=json --components
E1005 20:10:32.375900  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/addons-432328/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/Version/components (0.51s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (18.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-168323 /tmp/TestFunctionalparallelMountCmdany-port539855214/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1696536604461306025" to /tmp/TestFunctionalparallelMountCmdany-port539855214/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1696536604461306025" to /tmp/TestFunctionalparallelMountCmdany-port539855214/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1696536604461306025" to /tmp/TestFunctionalparallelMountCmdany-port539855214/001/test-1696536604461306025
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-168323 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (307.377253ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  5 20:10 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  5 20:10 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  5 20:10 test-1696536604461306025
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 ssh cat /mount-9p/test-1696536604461306025
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-168323 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [dc2a1420-8f0a-4286-910c-371d46ad3777] Pending
helpers_test.go:344: "busybox-mount" [dc2a1420-8f0a-4286-910c-371d46ad3777] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [dc2a1420-8f0a-4286-910c-371d46ad3777] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [dc2a1420-8f0a-4286-910c-371d46ad3777] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 15.010994389s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-168323 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-168323 /tmp/TestFunctionalparallelMountCmdany-port539855214/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (18.31s)
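
The first findmnt failure above is only a race: the 9p mount takes a moment to appear after "minikube mount" starts, and the test retries. The same flow by hand, with a hypothetical host directory:

  out/minikube-linux-amd64 mount -p functional-168323 /tmp/host-dir:/mount-9p &   # /tmp/host-dir is illustrative
  out/minikube-linux-amd64 -p functional-168323 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-168323 ssh "ls -la /mount-9p"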

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "284.8406ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "43.700877ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "298.292042ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "50.437716ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-168323 /tmp/TestFunctionalparallelMountCmdspecific-port3778418122/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-168323 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (312.541029ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-168323 /tmp/TestFunctionalparallelMountCmdspecific-port3778418122/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-168323 ssh "sudo umount -f /mount-9p": exit status 1 (260.730794ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-168323 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-168323 /tmp/TestFunctionalparallelMountCmdspecific-port3778418122/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.95s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-168323 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1563837450/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-168323 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1563837450/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-168323 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1563837450/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-168323 ssh "findmnt -T" /mount1: exit status 1 (508.179565ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-168323 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-168323 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1563837450/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-168323 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1563837450/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-168323 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1563837450/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.02s)
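The two MountCmd tests above exercise the 9p mount lifecycle end to end: starting `minikube mount` daemons (optionally on a fixed --port), checking inside the guest with findmnt, and tearing everything down with --kill=true. As a minimal sketch only, not part of the test suite, the same sequence can be driven from a small Go program; the binary path, profile name, port and mount point are copied from the log above, while the host directory /tmp/hostdir is a hypothetical placeholder.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Start a 9p mount daemon on a fixed port (flags as used in the log above).
	// "/tmp/hostdir" is a hypothetical host directory, not taken from the report.
	mount := exec.Command("out/minikube-linux-amd64", "mount", "-p", "functional-168323",
		"/tmp/hostdir:/mount-9p", "--port", "46464")
	if err := mount.Start(); err != nil {
		panic(err)
	}
	defer mount.Process.Kill()

	// Verify the mount from inside the guest, as the test does with findmnt.
	check := exec.Command("out/minikube-linux-amd64", "-p", "functional-168323",
		"ssh", "findmnt -T /mount-9p")
	out, err := check.CombinedOutput()
	fmt.Printf("findmnt: err=%v\n%s", err, out)

	// Ask minikube to terminate every mount process for the profile, as VerifyCleanup does.
	exec.Command("out/minikube-linux-amd64", "mount", "-p", "functional-168323", "--kill=true").Run()
}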

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-168323 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-168323
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-168323
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-168323 image ls --format short --alsologtostderr:
I1005 20:10:40.668428  547809 out.go:296] Setting OutFile to fd 1 ...
I1005 20:10:40.668572  547809 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 20:10:40.668585  547809 out.go:309] Setting ErrFile to fd 2...
I1005 20:10:40.668599  547809 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 20:10:40.668973  547809 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-491115/.minikube/bin
I1005 20:10:40.669727  547809 config.go:182] Loaded profile config "functional-168323": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1005 20:10:40.669960  547809 config.go:182] Loaded profile config "functional-168323": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1005 20:10:40.670426  547809 cli_runner.go:164] Run: docker container inspect functional-168323 --format={{.State.Status}}
I1005 20:10:40.687532  547809 ssh_runner.go:195] Run: systemctl --version
I1005 20:10:40.687589  547809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-168323
I1005 20:10:40.707447  547809 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33116 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/functional-168323/id_rsa Username:docker}
I1005 20:10:40.806198  547809 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-168323 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/google-containers/addon-resizer      | functional-168323 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | latest            | 61395b4c586da | 187MB  |
| docker.io/library/mysql                     | 5.7               | a5b7ceed40749 | 581MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| docker.io/library/nginx                     | alpine            | d571254277f6a | 42.6MB |
| registry.k8s.io/kube-apiserver              | v1.28.2           | cdcab12b2dd16 | 126MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/library/minikube-local-cache-test | functional-168323 | 373bb87ad78fc | 30B    |
| registry.k8s.io/kube-proxy                  | v1.28.2           | c120fed2beb84 | 73.1MB |
| registry.k8s.io/kube-scheduler              | v1.28.2           | 7a5d9d67a13f6 | 60.1MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/kube-controller-manager     | v1.28.2           | 55f13c92defb1 | 122MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-168323 image ls --format table --alsologtostderr:
I1005 20:10:40.877609  547994 out.go:296] Setting OutFile to fd 1 ...
I1005 20:10:40.877724  547994 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 20:10:40.877734  547994 out.go:309] Setting ErrFile to fd 2...
I1005 20:10:40.877739  547994 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 20:10:40.877935  547994 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-491115/.minikube/bin
I1005 20:10:40.878525  547994 config.go:182] Loaded profile config "functional-168323": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1005 20:10:40.878630  547994 config.go:182] Loaded profile config "functional-168323": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1005 20:10:40.879052  547994 cli_runner.go:164] Run: docker container inspect functional-168323 --format={{.State.Status}}
I1005 20:10:40.896928  547994 ssh_runner.go:195] Run: systemctl --version
I1005 20:10:40.896995  547994 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-168323
I1005 20:10:40.917372  547994 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33116 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/functional-168323/id_rsa Username:docker}
I1005 20:10:41.009712  547994 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-168323 image ls --format json --alsologtostderr:
[{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.2"],"size":"60100000"},{"id":"73deb9a3f702532592a416
7455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"373bb87ad78fc40319a56f81b049e7026cc2734f53ba030ac74c6040f015cf0c","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-168323"],"size":"30"},{"id":"cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.2"],"size":"126000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-168323"],"size":"32900000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"d571254277f6a0ba9d0c4a08f29b94476dcd4a95275bd484ece060ee4ff847e4
","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.2"],"size":"73100000"},{"id":"55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.2"],"size":"122000000"},{"id":"a5b7ceed4074932a04ea553af3124bb03b249affe14899e2cd746d1a63e12ecc","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"581000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"
],"size":"240000"},{"id":"61395b4c586da2b9b3b7ca903ea6a448e6783dfdd7f768ff2c1a0f3360aaba99","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-168323 image ls --format json --alsologtostderr:
I1005 20:10:40.668286  547806 out.go:296] Setting OutFile to fd 1 ...
I1005 20:10:40.668408  547806 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 20:10:40.668419  547806 out.go:309] Setting ErrFile to fd 2...
I1005 20:10:40.668427  547806 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 20:10:40.668718  547806 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-491115/.minikube/bin
I1005 20:10:40.669334  547806 config.go:182] Loaded profile config "functional-168323": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1005 20:10:40.669446  547806 config.go:182] Loaded profile config "functional-168323": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1005 20:10:40.669819  547806 cli_runner.go:164] Run: docker container inspect functional-168323 --format={{.State.Status}}
I1005 20:10:40.687537  547806 ssh_runner.go:195] Run: systemctl --version
I1005 20:10:40.687597  547806 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-168323
I1005 20:10:40.715286  547806 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33116 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/functional-168323/id_rsa Username:docker}
I1005 20:10:40.818124  547806 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
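The ImageListJson output above is a single JSON array whose objects carry the keys id, repoDigests, repoTags and size (size is reported as a string). A minimal decoding sketch in Go, assuming only the schema visible in that output and the binary path and profile name used by this run:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the keys visible in the JSON listing above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // reported as a decimal byte count in string form
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-168323",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%-14.14s %s\n", img.ID, img.RepoTags)
	}
}

The same struct would work for the yaml and table variants only after their own parsing; the json format is the one intended for programmatic use.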

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-168323 image ls --format yaml --alsologtostderr:
- id: 55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.2
size: "122000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-168323
size: "32900000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 373bb87ad78fc40319a56f81b049e7026cc2734f53ba030ac74c6040f015cf0c
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-168323
size: "30"
- id: d571254277f6a0ba9d0c4a08f29b94476dcd4a95275bd484ece060ee4ff847e4
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: 61395b4c586da2b9b3b7ca903ea6a448e6783dfdd7f768ff2c1a0f3360aaba99
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.2
size: "126000000"
- id: a5b7ceed4074932a04ea553af3124bb03b249affe14899e2cd746d1a63e12ecc
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "581000000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.2
size: "60100000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.2
size: "73100000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-168323 image ls --format yaml --alsologtostderr:
I1005 20:10:40.663480  547808 out.go:296] Setting OutFile to fd 1 ...
I1005 20:10:40.663627  547808 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 20:10:40.663633  547808 out.go:309] Setting ErrFile to fd 2...
I1005 20:10:40.663647  547808 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 20:10:40.663948  547808 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-491115/.minikube/bin
I1005 20:10:40.664824  547808 config.go:182] Loaded profile config "functional-168323": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1005 20:10:40.664979  547808 config.go:182] Loaded profile config "functional-168323": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1005 20:10:40.665668  547808 cli_runner.go:164] Run: docker container inspect functional-168323 --format={{.State.Status}}
I1005 20:10:40.688850  547808 ssh_runner.go:195] Run: systemctl --version
I1005 20:10:40.688944  547808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-168323
I1005 20:10:40.708365  547808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33116 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/functional-168323/id_rsa Username:docker}
I1005 20:10:40.801550  547808 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-168323 ssh pgrep buildkitd: exit status 1 (283.521533ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 image build -t localhost/my-image:functional-168323 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-168323 image build -t localhost/my-image:functional-168323 testdata/build --alsologtostderr: (1.595003695s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-168323 image build -t localhost/my-image:functional-168323 testdata/build --alsologtostderr:
I1005 20:10:40.938551  548022 out.go:296] Setting OutFile to fd 1 ...
I1005 20:10:40.938668  548022 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 20:10:40.938676  548022 out.go:309] Setting ErrFile to fd 2...
I1005 20:10:40.938681  548022 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 20:10:40.938904  548022 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-491115/.minikube/bin
I1005 20:10:40.939526  548022 config.go:182] Loaded profile config "functional-168323": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1005 20:10:40.940120  548022 config.go:182] Loaded profile config "functional-168323": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1005 20:10:40.940563  548022 cli_runner.go:164] Run: docker container inspect functional-168323 --format={{.State.Status}}
I1005 20:10:40.958105  548022 ssh_runner.go:195] Run: systemctl --version
I1005 20:10:40.958161  548022 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-168323
I1005 20:10:40.976433  548022 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33116 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/functional-168323/id_rsa Username:docker}
I1005 20:10:41.073565  548022 build_images.go:151] Building image from path: /tmp/build.363812713.tar
I1005 20:10:41.073625  548022 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1005 20:10:41.082076  548022 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.363812713.tar
I1005 20:10:41.085341  548022 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.363812713.tar: stat -c "%s %y" /var/lib/minikube/build/build.363812713.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.363812713.tar': No such file or directory
I1005 20:10:41.085384  548022 ssh_runner.go:362] scp /tmp/build.363812713.tar --> /var/lib/minikube/build/build.363812713.tar (3072 bytes)
I1005 20:10:41.108561  548022 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.363812713
I1005 20:10:41.116815  548022 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.363812713 -xf /var/lib/minikube/build/build.363812713.tar
I1005 20:10:41.125366  548022 docker.go:340] Building image: /var/lib/minikube/build/build.363812713
I1005 20:10:41.125435  548022 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-168323 /var/lib/minikube/build/build.363812713
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 97B done
#2 DONE 0.0s

                                                
                                                
#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 0.5s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.1s done
#5 DONE 0.2s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.3s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:0a59c65ac4af655ed8cca1f2b99af5c39d14a9ec0d339e4c40bddf7444e0b6af done
#8 naming to localhost/my-image:functional-168323 done
#8 DONE 0.0s
I1005 20:10:42.468825  548022 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-168323 /var/lib/minikube/build/build.363812713: (1.343364246s)
I1005 20:10:42.468892  548022 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.363812713
I1005 20:10:42.477717  548022 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.363812713.tar
I1005 20:10:42.486498  548022 build_images.go:207] Built localhost/my-image:functional-168323 from /tmp/build.363812713.tar
I1005 20:10:42.486530  548022 build_images.go:123] succeeded building to: functional-168323
I1005 20:10:42.486535  548022 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.08s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-168323
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.01s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 image load --daemon gcr.io/google-containers/addon-resizer:functional-168323 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-168323 image load --daemon gcr.io/google-containers/addon-resizer:functional-168323 --alsologtostderr: (3.018855904s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 image load --daemon gcr.io/google-containers/addon-resizer:functional-168323 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-168323 image load --daemon gcr.io/google-containers/addon-resizer:functional-168323 --alsologtostderr: (2.411697042s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.61s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
2023/10/05 20:10:34 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-168323
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 image load --daemon gcr.io/google-containers/addon-resizer:functional-168323 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-168323 image load --daemon gcr.io/google-containers/addon-resizer:functional-168323 --alsologtostderr: (2.810010997s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.92s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 image save gcr.io/google-containers/addon-resizer:functional-168323 /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.74s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 image rm gcr.io/google-containers/addon-resizer:functional-168323 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.43s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 image load /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.01s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-168323
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-168323 image save --daemon gcr.io/google-containers/addon-resizer:functional-168323 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-168323
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.94s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-168323
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-168323
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-168323
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestImageBuild/serial/Setup (21.99s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-058482 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-058482 --driver=docker  --container-runtime=docker: (21.990488244s)
--- PASS: TestImageBuild/serial/Setup (21.99s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.15s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-058482
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-058482: (1.146059317s)
--- PASS: TestImageBuild/serial/NormalBuild (1.15s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.82s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-058482
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.82s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.6s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-058482
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.60s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.57s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-058482
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.57s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (60.75s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-111068 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E1005 20:11:13.337038  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/addons-432328/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-111068 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (1m0.752721933s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (60.75s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (9.9s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-111068 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-111068 addons enable ingress --alsologtostderr -v=5: (9.895340389s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (9.90s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.51s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-111068 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.51s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (39.15s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:205: (dbg) Run:  kubectl --context ingress-addon-legacy-111068 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
E1005 20:12:35.257354  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/addons-432328/client.crt: no such file or directory
addons_test.go:205: (dbg) Done: kubectl --context ingress-addon-legacy-111068 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (16.162222697s)
addons_test.go:230: (dbg) Run:  kubectl --context ingress-addon-legacy-111068 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context ingress-addon-legacy-111068 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [f580af27-fd28-458d-9e83-a98c25a204e1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [f580af27-fd28-458d-9e83-a98c25a204e1] Running
addons_test.go:248: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 8.008240855s
addons_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-111068 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context ingress-addon-legacy-111068 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-111068 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-111068 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-111068 addons disable ingress-dns --alsologtostderr -v=1: (6.537369458s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-111068 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-111068 addons disable ingress --alsologtostderr -v=1: (7.350175285s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (39.15s)

                                                
                                    
TestJSONOutput/start/Command (40.53s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-068388 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-068388 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (40.530201201s)
--- PASS: TestJSONOutput/start/Command (40.53s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.52s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-068388 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.52s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.46s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-068388 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.46s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (10.93s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-068388 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-068388 --output=json --user=testUser: (10.924970865s)
--- PASS: TestJSONOutput/stop/Command (10.93s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-170139 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-170139 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (65.777643ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"927cc9d0-ac51-45ee-bbd3-1ec238b5abb3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-170139] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4b69cc3c-0eab-42c2-8ff5-56c6836e6625","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17363"}}
	{"specversion":"1.0","id":"b8cfa99d-991c-458a-a880-276ebb407db6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"732fa3bf-026b-4456-9c0f-c7caa4ccc7a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17363-491115/kubeconfig"}}
	{"specversion":"1.0","id":"e96a3885-3a3b-4ff7-9e19-35f5f23e5787","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-491115/.minikube"}}
	{"specversion":"1.0","id":"95420e2c-22bd-47ce-908d-d048bde9f9ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0d556ffa-a892-46a3-9d04-b7ef944d63ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8c3604ef-a010-4f1a-a07e-7e59d66f156f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-170139" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-170139
--- PASS: TestErrorJSONOutput (0.20s)
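TestErrorJSONOutput shows that --output=json emits one CloudEvents-style JSON object per line, with step, info and error event types and a string-valued data map (message, name, exitcode, and so on). A minimal consumer sketch, assuming only the field names visible in that output; pipe the minikube output into it:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the CloudEvents-style keys visible in the log above.
type event struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. minikube start --output=json | this program
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore any non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Println("error:", ev.Data["name"], ev.Data["message"])
		} else {
			fmt.Println(ev.Data["message"])
		}
	}
}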

                                                
                                    
TestKicCustomNetwork/create_custom_network (27.28s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-860904 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-860904 --network=: (25.159723578s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-860904" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-860904
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-860904: (2.10177501s)
--- PASS: TestKicCustomNetwork/create_custom_network (27.28s)
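Note: the flow exercised by this test can be repeated by hand with the same flags it passes; a minimal sketch, with an illustrative profile name (not taken from this run):

	minikube start -p demo --network=          # leave the network name empty and let minikube pick/create a docker network
	docker network ls --format {{.Name}}       # the network backing the cluster should now be listed
	minikube delete -p demo                    # cleanup removes the profile and its network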

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (24.23s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-727751 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-727751 --network=bridge: (22.325187262s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-727751" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-727751
E1005 20:14:51.412633  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/addons-432328/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-727751: (1.889889386s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.23s)

                                                
                                    
TestKicExistingNetwork (24.42s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-863714 --network=existing-network
E1005 20:14:52.191828  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/functional-168323/client.crt: no such file or directory
E1005 20:14:52.197201  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/functional-168323/client.crt: no such file or directory
E1005 20:14:52.207561  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/functional-168323/client.crt: no such file or directory
E1005 20:14:52.227965  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/functional-168323/client.crt: no such file or directory
E1005 20:14:52.268400  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/functional-168323/client.crt: no such file or directory
E1005 20:14:52.348747  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/functional-168323/client.crt: no such file or directory
E1005 20:14:52.509174  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/functional-168323/client.crt: no such file or directory
E1005 20:14:52.829757  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/functional-168323/client.crt: no such file or directory
E1005 20:14:53.469943  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/functional-168323/client.crt: no such file or directory
E1005 20:14:54.750588  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/functional-168323/client.crt: no such file or directory
E1005 20:14:57.312433  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/functional-168323/client.crt: no such file or directory
E1005 20:15:02.433085  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/functional-168323/client.crt: no such file or directory
E1005 20:15:12.673891  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/functional-168323/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-863714 --network=existing-network: (22.435793032s)
helpers_test.go:175: Cleaning up "existing-network-863714" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-863714
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-863714: (1.845091069s)
--- PASS: TestKicExistingNetwork (24.42s)
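Note: the existing-network variant differs only in that the docker network exists before minikube starts. The test's setup step is not shown in this log, so the pre-creation command below is an assumption, and the names are illustrative:

	docker network create existing-net                  # assumed setup step; the test pre-creates the network
	minikube start -p demo --network=existing-net       # minikube attaches to the already-existing network
	minikube delete -p demo
	docker network rm existing-net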

                                                
                                    
TestKicCustomSubnet (27.1s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-492974 --subnet=192.168.60.0/24
E1005 20:15:19.097699  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/addons-432328/client.crt: no such file or directory
E1005 20:15:33.154875  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/functional-168323/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-492974 --subnet=192.168.60.0/24: (25.049408152s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-492974 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-492974" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-492974
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-492974: (2.033741578s)
--- PASS: TestKicCustomSubnet (27.10s)
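Note: the subnet check can be reproduced manually with the same flag and the same inspect format string the test uses; in this test the docker network created for the cluster is named after the profile (profile name below is illustrative):

	minikube start -p demo --subnet=192.168.60.0/24
	docker network inspect demo --format "{{(index .IPAM.Config 0).Subnet}}"    # expected to print 192.168.60.0/24
	minikube delete -p demo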

                                                
                                    
TestKicStaticIP (26.9s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-231747 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-231747 --static-ip=192.168.200.200: (24.695752696s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-231747 ip
helpers_test.go:175: Cleaning up "static-ip-231747" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-231747
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-231747: (2.081387003s)
--- PASS: TestKicStaticIP (26.90s)
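Note: a manual equivalent of the static-IP check above (profile name illustrative):

	minikube start -p demo --static-ip=192.168.200.200
	minikube -p demo ip        # expected to print 192.168.200.200
	minikube delete -p demo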

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (57.2s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-874922 --driver=docker  --container-runtime=docker
E1005 20:16:14.115407  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/functional-168323/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-874922 --driver=docker  --container-runtime=docker: (26.446943887s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-877332 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-877332 --driver=docker  --container-runtime=docker: (25.59400681s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-874922
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-877332
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-877332" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-877332
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-877332: (2.077317708s)
helpers_test.go:175: Cleaning up "first-874922" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-874922
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-874922: (2.09051328s)
--- PASS: TestMinikubeProfile (57.20s)
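Note: the profile-switching sequence above, reduced to its commands (profile names illustrative):

	minikube start -p first --driver=docker --container-runtime=docker
	minikube start -p second --driver=docker --container-runtime=docker
	minikube profile first              # make "first" the active profile
	minikube profile list -ojson       # the active profile is reflected in the JSON output
	minikube delete -p second
	minikube delete -p first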

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.83s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-534667 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-534667 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.826845682s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.83s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-534667 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)
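Note: the mount-at-start flags and the verification step used by the MountStart tests can be run directly; the flag values below are the ones the test passes, only the profile name is illustrative:

	minikube start -p demo --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker --container-runtime=docker
	minikube -p demo ssh -- ls /minikube-host    # the mounted host directory should be visible inside the node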

                                                
                                    
TestMountStart/serial/StartWithMountSecond (6.06s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-554226 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-554226 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (5.05846771s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.06s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-554226 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.47s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-534667 --alsologtostderr -v=5
E1005 20:17:24.173708  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/ingress-addon-legacy-111068/client.crt: no such file or directory
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-534667 --alsologtostderr -v=5: (1.468566283s)
--- PASS: TestMountStart/serial/DeleteFirst (1.47s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-554226 ssh -- ls /minikube-host
E1005 20:17:24.179502  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/ingress-addon-legacy-111068/client.crt: no such file or directory
E1005 20:17:24.189789  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/ingress-addon-legacy-111068/client.crt: no such file or directory
E1005 20:17:24.210130  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/ingress-addon-legacy-111068/client.crt: no such file or directory
E1005 20:17:24.250409  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/ingress-addon-legacy-111068/client.crt: no such file or directory
E1005 20:17:24.330769  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/ingress-addon-legacy-111068/client.crt: no such file or directory
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                    
TestMountStart/serial/Stop (1.19s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-554226
E1005 20:17:24.490901  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/ingress-addon-legacy-111068/client.crt: no such file or directory
E1005 20:17:24.811453  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/ingress-addon-legacy-111068/client.crt: no such file or directory
E1005 20:17:25.452423  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/ingress-addon-legacy-111068/client.crt: no such file or directory
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-554226: (1.185415418s)
--- PASS: TestMountStart/serial/Stop (1.19s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.23s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-554226
E1005 20:17:26.733072  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/ingress-addon-legacy-111068/client.crt: no such file or directory
E1005 20:17:29.293882  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/ingress-addon-legacy-111068/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-554226: (6.234424463s)
--- PASS: TestMountStart/serial/RestartStopped (7.23s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-554226 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (69.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-630958 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E1005 20:17:36.036310  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/functional-168323/client.crt: no such file or directory
E1005 20:17:44.656137  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/ingress-addon-legacy-111068/client.crt: no such file or directory
E1005 20:18:05.136880  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/ingress-addon-legacy-111068/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-630958 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m8.707640067s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (69.18s)
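Note: to bring up the same two-node topology outside the test harness (profile name illustrative, flags as passed by the test):

	minikube start -p demo --wait=true --memory=2200 --nodes=2 --driver=docker --container-runtime=docker
	minikube -p demo status --alsologtostderr    # the control plane and the m02 worker should both report Running
	kubectl get nodes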

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-630958 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-630958 -- rollout status deployment/busybox
E1005 20:18:46.098325  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/ingress-addon-legacy-111068/client.crt: no such file or directory
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-630958 -- rollout status deployment/busybox: (2.319390918s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-630958 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-630958 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-630958 -- exec busybox-5bc68d56bd-7kg5z -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-630958 -- exec busybox-5bc68d56bd-jvbq8 -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-630958 -- exec busybox-5bc68d56bd-7kg5z -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-630958 -- exec busybox-5bc68d56bd-jvbq8 -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-630958 -- exec busybox-5bc68d56bd-7kg5z -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-630958 -- exec busybox-5bc68d56bd-jvbq8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.84s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-630958 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-630958 -- exec busybox-5bc68d56bd-7kg5z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-630958 -- exec busybox-5bc68d56bd-7kg5z -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-630958 -- exec busybox-5bc68d56bd-jvbq8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-630958 -- exec busybox-5bc68d56bd-jvbq8 -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.80s)

                                                
                                    
TestMultiNode/serial/AddNode (18.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-630958 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-630958 -v 3 --alsologtostderr: (17.503524718s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.13s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.28s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 cp testdata/cp-test.txt multinode-630958:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 ssh -n multinode-630958 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 cp multinode-630958:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4266856774/001/cp-test_multinode-630958.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 ssh -n multinode-630958 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 cp multinode-630958:/home/docker/cp-test.txt multinode-630958-m02:/home/docker/cp-test_multinode-630958_multinode-630958-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 ssh -n multinode-630958 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 ssh -n multinode-630958-m02 "sudo cat /home/docker/cp-test_multinode-630958_multinode-630958-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 cp multinode-630958:/home/docker/cp-test.txt multinode-630958-m03:/home/docker/cp-test_multinode-630958_multinode-630958-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 ssh -n multinode-630958 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 ssh -n multinode-630958-m03 "sudo cat /home/docker/cp-test_multinode-630958_multinode-630958-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 cp testdata/cp-test.txt multinode-630958-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 ssh -n multinode-630958-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 cp multinode-630958-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4266856774/001/cp-test_multinode-630958-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 ssh -n multinode-630958-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 cp multinode-630958-m02:/home/docker/cp-test.txt multinode-630958:/home/docker/cp-test_multinode-630958-m02_multinode-630958.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 ssh -n multinode-630958-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 ssh -n multinode-630958 "sudo cat /home/docker/cp-test_multinode-630958-m02_multinode-630958.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 cp multinode-630958-m02:/home/docker/cp-test.txt multinode-630958-m03:/home/docker/cp-test_multinode-630958-m02_multinode-630958-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 ssh -n multinode-630958-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 ssh -n multinode-630958-m03 "sudo cat /home/docker/cp-test_multinode-630958-m02_multinode-630958-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 cp testdata/cp-test.txt multinode-630958-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 ssh -n multinode-630958-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 cp multinode-630958-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4266856774/001/cp-test_multinode-630958-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 ssh -n multinode-630958-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 cp multinode-630958-m03:/home/docker/cp-test.txt multinode-630958:/home/docker/cp-test_multinode-630958-m03_multinode-630958.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 ssh -n multinode-630958-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 ssh -n multinode-630958 "sudo cat /home/docker/cp-test_multinode-630958-m03_multinode-630958.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 cp multinode-630958-m03:/home/docker/cp-test.txt multinode-630958-m02:/home/docker/cp-test_multinode-630958-m03_multinode-630958-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 ssh -n multinode-630958-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 ssh -n multinode-630958-m02 "sudo cat /home/docker/cp-test_multinode-630958-m03_multinode-630958-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.14s)
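Note: the copy matrix above boils down to three `minikube cp` forms (local-to-node, node-to-local, node-to-node), each verified over `ssh -n`; a sketch with illustrative profile and node names:

	minikube -p demo cp testdata/cp-test.txt demo:/home/docker/cp-test.txt           # local file into a node
	minikube -p demo cp demo:/home/docker/cp-test.txt /tmp/cp-test.txt               # node file back to the local machine
	minikube -p demo cp demo:/home/docker/cp-test.txt demo-m02:/home/docker/cp-test.txt   # node to node
	minikube -p demo ssh -n demo-m02 "sudo cat /home/docker/cp-test.txt"             # confirm the copy arrived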

                                                
                                    
TestMultiNode/serial/StopNode (2.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-630958 node stop m03: (1.201158409s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-630958 status: exit status 7 (479.458659ms)

                                                
                                                
-- stdout --
	multinode-630958
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-630958-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-630958-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-630958 status --alsologtostderr: exit status 7 (463.836453ms)

                                                
                                                
-- stdout --
	multinode-630958
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-630958-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-630958-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1005 20:19:17.768030  623119 out.go:296] Setting OutFile to fd 1 ...
	I1005 20:19:17.768331  623119 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:19:17.768342  623119 out.go:309] Setting ErrFile to fd 2...
	I1005 20:19:17.768347  623119 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:19:17.768598  623119 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-491115/.minikube/bin
	I1005 20:19:17.768838  623119 out.go:303] Setting JSON to false
	I1005 20:19:17.768894  623119 mustload.go:65] Loading cluster: multinode-630958
	I1005 20:19:17.768987  623119 notify.go:220] Checking for updates...
	I1005 20:19:17.769442  623119 config.go:182] Loaded profile config "multinode-630958": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1005 20:19:17.769459  623119 status.go:255] checking status of multinode-630958 ...
	I1005 20:19:17.769959  623119 cli_runner.go:164] Run: docker container inspect multinode-630958 --format={{.State.Status}}
	I1005 20:19:17.788563  623119 status.go:330] multinode-630958 host status = "Running" (err=<nil>)
	I1005 20:19:17.788601  623119 host.go:66] Checking if "multinode-630958" exists ...
	I1005 20:19:17.788893  623119 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-630958
	I1005 20:19:17.805743  623119 host.go:66] Checking if "multinode-630958" exists ...
	I1005 20:19:17.806030  623119 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1005 20:19:17.806084  623119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-630958
	I1005 20:19:17.823023  623119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/multinode-630958/id_rsa Username:docker}
	I1005 20:19:17.914136  623119 ssh_runner.go:195] Run: systemctl --version
	I1005 20:19:17.917925  623119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 20:19:17.928152  623119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 20:19:17.984372  623119 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:56 SystemTime:2023-10-05 20:19:17.975359217 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1005 20:19:17.984956  623119 kubeconfig.go:92] found "multinode-630958" server: "https://192.168.58.2:8443"
	I1005 20:19:17.984981  623119 api_server.go:166] Checking apiserver status ...
	I1005 20:19:17.985022  623119 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1005 20:19:17.995897  623119 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2325/cgroup
	I1005 20:19:18.004406  623119 api_server.go:182] apiserver freezer: "9:freezer:/docker/da2a214b5882c63d273ef0f0b068fcf0735054f7920d13dd078124000485443a/kubepods/burstable/podd94b0d7a50076f3b982b72e71cf4bfa8/c4d3f03ff6001b95e616304b420b0751e3340da1723c6b12532bc52aaabc0a89"
	I1005 20:19:18.004463  623119 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/da2a214b5882c63d273ef0f0b068fcf0735054f7920d13dd078124000485443a/kubepods/burstable/podd94b0d7a50076f3b982b72e71cf4bfa8/c4d3f03ff6001b95e616304b420b0751e3340da1723c6b12532bc52aaabc0a89/freezer.state
	I1005 20:19:18.012711  623119 api_server.go:204] freezer state: "THAWED"
	I1005 20:19:18.012763  623119 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1005 20:19:18.017689  623119 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1005 20:19:18.017721  623119 status.go:421] multinode-630958 apiserver status = Running (err=<nil>)
	I1005 20:19:18.017731  623119 status.go:257] multinode-630958 status: &{Name:multinode-630958 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1005 20:19:18.017748  623119 status.go:255] checking status of multinode-630958-m02 ...
	I1005 20:19:18.017972  623119 cli_runner.go:164] Run: docker container inspect multinode-630958-m02 --format={{.State.Status}}
	I1005 20:19:18.034897  623119 status.go:330] multinode-630958-m02 host status = "Running" (err=<nil>)
	I1005 20:19:18.034929  623119 host.go:66] Checking if "multinode-630958-m02" exists ...
	I1005 20:19:18.035164  623119 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-630958-m02
	I1005 20:19:18.052279  623119 host.go:66] Checking if "multinode-630958-m02" exists ...
	I1005 20:19:18.052541  623119 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1005 20:19:18.052578  623119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-630958-m02
	I1005 20:19:18.069057  623119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33191 SSHKeyPath:/home/jenkins/minikube-integration/17363-491115/.minikube/machines/multinode-630958-m02/id_rsa Username:docker}
	I1005 20:19:18.162262  623119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 20:19:18.172994  623119 status.go:257] multinode-630958-m02 status: &{Name:multinode-630958-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1005 20:19:18.173033  623119 status.go:255] checking status of multinode-630958-m03 ...
	I1005 20:19:18.173378  623119 cli_runner.go:164] Run: docker container inspect multinode-630958-m03 --format={{.State.Status}}
	I1005 20:19:18.190286  623119 status.go:330] multinode-630958-m03 host status = "Stopped" (err=<nil>)
	I1005 20:19:18.190318  623119 status.go:343] host is not running, skipping remaining checks
	I1005 20:19:18.190328  623119 status.go:257] multinode-630958-m03 status: &{Name:multinode-630958-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.14s)
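Note: the exit status 7 seen above is worth keeping in mind when scripting against `minikube status`: one stopped node makes the whole command non-zero even though the other nodes are healthy. A minimal check, with an illustrative profile name:

	minikube -p demo node stop m03
	minikube -p demo status || echo "at least one node is not running (exit $?)"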

                                                
                                    
TestMultiNode/serial/StartAfterStop (12.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-630958 node start m03 --alsologtostderr: (11.422065182s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.10s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (111.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-630958
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-630958
E1005 20:19:51.413389  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/addons-432328/client.crt: no such file or directory
E1005 20:19:52.191570  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/functional-168323/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-630958: (22.433758179s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-630958 --wait=true -v=8 --alsologtostderr
E1005 20:20:08.019279  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/ingress-addon-legacy-111068/client.crt: no such file or directory
E1005 20:20:19.877427  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/functional-168323/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-630958 --wait=true -v=8 --alsologtostderr: (1m29.425365488s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-630958
--- PASS: TestMultiNode/serial/RestartKeepsNodes (111.94s)

                                                
                                    
TestMultiNode/serial/DeleteNode (4.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-630958 node delete m03: (4.143658411s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.74s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-630958 stop: (21.409846966s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-630958 status: exit status 7 (82.958159ms)

                                                
                                                
-- stdout --
	multinode-630958
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-630958-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-630958 status --alsologtostderr: exit status 7 (77.754677ms)

                                                
                                                
-- stdout --
	multinode-630958
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-630958-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1005 20:21:48.504094  640807 out.go:296] Setting OutFile to fd 1 ...
	I1005 20:21:48.504392  640807 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:21:48.504402  640807 out.go:309] Setting ErrFile to fd 2...
	I1005 20:21:48.504409  640807 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:21:48.504606  640807 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-491115/.minikube/bin
	I1005 20:21:48.504784  640807 out.go:303] Setting JSON to false
	I1005 20:21:48.504829  640807 mustload.go:65] Loading cluster: multinode-630958
	I1005 20:21:48.504934  640807 notify.go:220] Checking for updates...
	I1005 20:21:48.505294  640807 config.go:182] Loaded profile config "multinode-630958": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1005 20:21:48.505314  640807 status.go:255] checking status of multinode-630958 ...
	I1005 20:21:48.505780  640807 cli_runner.go:164] Run: docker container inspect multinode-630958 --format={{.State.Status}}
	I1005 20:21:48.524169  640807 status.go:330] multinode-630958 host status = "Stopped" (err=<nil>)
	I1005 20:21:48.524192  640807 status.go:343] host is not running, skipping remaining checks
	I1005 20:21:48.524198  640807 status.go:257] multinode-630958 status: &{Name:multinode-630958 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1005 20:21:48.524242  640807 status.go:255] checking status of multinode-630958-m02 ...
	I1005 20:21:48.524504  640807 cli_runner.go:164] Run: docker container inspect multinode-630958-m02 --format={{.State.Status}}
	I1005 20:21:48.541935  640807 status.go:330] multinode-630958-m02 host status = "Stopped" (err=<nil>)
	I1005 20:21:48.541956  640807 status.go:343] host is not running, skipping remaining checks
	I1005 20:21:48.541962  640807 status.go:257] multinode-630958-m02 status: &{Name:multinode-630958-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.57s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (84.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-630958 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E1005 20:22:24.173106  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/ingress-addon-legacy-111068/client.crt: no such file or directory
E1005 20:22:51.859676  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/ingress-addon-legacy-111068/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-630958 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m24.089170046s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630958 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (84.71s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (27.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-630958
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-630958-m02 --driver=docker  --container-runtime=docker
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-630958-m02 --driver=docker  --container-runtime=docker: exit status 14 (63.857497ms)

                                                
                                                
-- stdout --
	* [multinode-630958-m02] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17363-491115/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-491115/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-630958-m02' is duplicated with machine name 'multinode-630958-m02' in profile 'multinode-630958'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-630958-m03 --driver=docker  --container-runtime=docker
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-630958-m03 --driver=docker  --container-runtime=docker: (24.746038344s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-630958
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-630958: exit status 80 (277.034805ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-630958
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-630958-m03 already exists in multinode-630958-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-630958-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-630958-m03: (2.059792834s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (27.19s)

                                                
                                    
TestPreload (114.55s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-443122 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-443122 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (55.512455912s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-443122 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-443122
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-443122: (10.66742552s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-443122 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
E1005 20:24:51.412754  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/addons-432328/client.crt: no such file or directory
E1005 20:24:52.191839  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/functional-168323/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-443122 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (45.294786187s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-443122 image list
helpers_test.go:175: Cleaning up "test-preload-443122" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-443122
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-443122: (2.126624028s)
--- PASS: TestPreload (114.55s)
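Note: the preload scenario can be reproduced manually: start without a preloaded tarball, pull an extra image, then confirm it survives a stop/start cycle. The Kubernetes version is the one the test pins; the profile name is illustrative:

	minikube start -p demo --preload=false --kubernetes-version=v1.24.4 --driver=docker --container-runtime=docker
	minikube -p demo image pull gcr.io/k8s-minikube/busybox
	minikube stop -p demo
	minikube start -p demo --driver=docker --container-runtime=docker
	minikube -p demo image list     # the busybox image pulled above should still be listed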

                                                
                                    
TestScheduledStopUnix (98.72s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-622955 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-622955 --memory=2048 --driver=docker  --container-runtime=docker: (25.863204792s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-622955 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-622955 -n scheduled-stop-622955
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-622955 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-622955 --cancel-scheduled
E1005 20:26:14.458427  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/addons-432328/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-622955 -n scheduled-stop-622955
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-622955
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-622955 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-622955
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-622955: exit status 7 (62.085919ms)

                                                
                                                
-- stdout --
	scheduled-stop-622955
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-622955 -n scheduled-stop-622955
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-622955 -n scheduled-stop-622955: exit status 7 (61.457192ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-622955" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-622955
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-622955: (1.600395288s)
--- PASS: TestScheduledStopUnix (98.72s)
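A minimal sketch of the scheduled-stop workflow this test exercises, assuming an illustrative profile name ("stop-demo"); every flag appears in the run lines above:
    minikube start -p stop-demo --memory=2048 --driver=docker --container-runtime=docker
    minikube stop -p stop-demo --schedule 5m                      # queue a stop 5 minutes out
    minikube status -p stop-demo --format='{{.TimeToStop}}'       # confirm the countdown is set
    minikube stop -p stop-demo --cancel-scheduled                 # cancel it again
    minikube stop -p stop-demo --schedule 15s                     # queue a short stop and let it fire
    minikube status -p stop-demo --format='{{.Host}}'             # exits 7 once the host reports "Stopped"
    minikube delete -p stop-demo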

                                                
                                    
x
+
TestSkaffold (99.08s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe4172020652 version
skaffold_test.go:63: skaffold version: v2.8.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-440533 --memory=2600 --driver=docker  --container-runtime=docker
E1005 20:27:24.173859  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/ingress-addon-legacy-111068/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-440533 --memory=2600 --driver=docker  --container-runtime=docker: (23.110021421s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe4172020652 run --minikube-profile skaffold-440533 --kube-context skaffold-440533 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe4172020652 run --minikube-profile skaffold-440533 --kube-context skaffold-440533 --status-check=true --port-forward=false --interactive=false: (1m2.577959037s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-bf9f6558b-cwxq9" [57be0db0-0a30-4633-9466-f98e22470e72] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.014616019s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-7fb6dcd66b-2jfbl" [d35b6e91-977e-419c-925b-335be4f060c9] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.009274369s
helpers_test.go:175: Cleaning up "skaffold-440533" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-440533
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-440533: (2.7051914s)
--- PASS: TestSkaffold (99.08s)
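A sketch of the skaffold-on-minikube flow checked above, assuming skaffold v2.8.0 is on PATH and an illustrative profile name; the leeroy-app/leeroy-web labels come from the example microservices app the test deploys:
    minikube start -p skaffold-demo --memory=2600 --driver=docker --container-runtime=docker
    skaffold run --minikube-profile skaffold-demo --kube-context skaffold-demo \
        --status-check=true --port-forward=false --interactive=false
    kubectl --context skaffold-demo get pods -l app=leeroy-app    # both deployments should reach Running
    kubectl --context skaffold-demo get pods -l app=leeroy-web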

                                                
                                    
x
+
TestInsufficientStorage (13.28s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-926714 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-926714 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (11.095366347s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ba468c03-e4ef-47cc-a823-7517775e5d4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-926714] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"af1f6d1b-0a82-43b6-9263-e2e114dd98bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17363"}}
	{"specversion":"1.0","id":"8defe9f7-a390-4f48-a5c0-be51ae48d515","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6c4e0d0e-5062-401d-ac72-a74aebd8c13c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17363-491115/kubeconfig"}}
	{"specversion":"1.0","id":"77afd165-d260-4e50-aa40-87da46587c6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-491115/.minikube"}}
	{"specversion":"1.0","id":"7b3e64ea-0348-4df7-b002-0fe6ebb23ae2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"5e22da4e-fac4-45cd-8a58-c08c0c3b7613","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f4300ce3-7ea9-4520-9e20-bb35f18d8f55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"ce75b0ff-5bfd-4e9d-965b-450f1f0a5fce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"a5ca3a24-2cf3-405c-aa8c-45b5ae076988","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"05226361-f9a5-4fa4-b8c7-2bbb4fc1d75b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"c8b30ca9-9370-4c9a-a178-edc43ac40450","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-926714 in cluster insufficient-storage-926714","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"12ff0298-9fa4-49ed-89a9-563eefecd608","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"eb91609a-ed3f-456a-87be-5ab552315032","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"1dbfb545-fdbd-4e82-8392-778209e054a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-926714 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-926714 --output=json --layout=cluster: exit status 7 (259.24093ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-926714","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-926714","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1005 20:29:08.054962  682383 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-926714" does not appear in /home/jenkins/minikube-integration/17363-491115/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-926714 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-926714 --output=json --layout=cluster: exit status 7 (256.191114ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-926714","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-926714","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1005 20:29:08.311221  682470 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-926714" does not appear in /home/jenkins/minikube-integration/17363-491115/kubeconfig
	E1005 20:29:08.321347  682470 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/insufficient-storage-926714/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-926714" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-926714
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-926714: (1.664979559s)
--- PASS: TestInsufficientStorage (13.28s)
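The storage check can be reproduced with the same knobs the harness sets; MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE are the test-only overrides visible in the JSON output above, and the profile name is illustrative:
    MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
        minikube start -p storage-demo --memory=2048 --output=json --wait=true --driver=docker --container-runtime=docker
    # expected: exit status 26 with reason RSRC_DOCKER_STORAGE
    minikube status -p storage-demo --output=json --layout=cluster    # nodes report StatusCode 507 "InsufficientStorage"
    minikube delete -p storage-demo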

                                                
                                    
x
+
TestRunningBinaryUpgrade (85.34s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.9.0.1005395052.exe start -p running-upgrade-072066 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.9.0.1005395052.exe start -p running-upgrade-072066 --memory=2200 --vm-driver=docker  --container-runtime=docker: (57.230866502s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-072066 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-072066 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (25.827515769s)
helpers_test.go:175: Cleaning up "running-upgrade-072066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-072066
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-072066: (1.720858109s)
--- PASS: TestRunningBinaryUpgrade (85.34s)
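In outline, the upgrade path tested here is: create a cluster with an old release binary (v1.9.0 still uses --vm-driver), then run the freshly built binary against the same running profile. A sketch, with a placeholder path for the old binary and an illustrative profile name:
    /path/to/minikube-v1.9.0 start -p upgrade-demo --memory=2200 --vm-driver=docker --container-runtime=docker
    out/minikube-linux-amd64 start -p upgrade-demo --memory=2200 --driver=docker --container-runtime=docker
    out/minikube-linux-amd64 delete -p upgrade-demo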

                                                
                                    
x
+
TestKubernetesUpgrade (188.72s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-657856 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-657856 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (52.977109852s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-657856
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-657856: (1.238701753s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-657856 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-657856 status --format={{.Host}}: exit status 7 (67.008441ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-657856 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-657856 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m32.835440944s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-657856 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-657856 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-657856 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker: exit status 106 (100.095327ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-657856] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17363-491115/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-491115/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-657856
	    minikube start -p kubernetes-upgrade-657856 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6578562 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.2, by running:
	    
	    minikube start -p kubernetes-upgrade-657856 --kubernetes-version=v1.28.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-657856 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1005 20:33:43.973659  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/skaffold-440533/client.crt: no such file or directory
E1005 20:33:43.978912  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/skaffold-440533/client.crt: no such file or directory
E1005 20:33:43.989222  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/skaffold-440533/client.crt: no such file or directory
E1005 20:33:44.009532  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/skaffold-440533/client.crt: no such file or directory
E1005 20:33:44.050177  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/skaffold-440533/client.crt: no such file or directory
E1005 20:33:44.130749  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/skaffold-440533/client.crt: no such file or directory
E1005 20:33:44.291497  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/skaffold-440533/client.crt: no such file or directory
E1005 20:33:44.611958  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/skaffold-440533/client.crt: no such file or directory
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-657856 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (38.942263933s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-657856" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-657856
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-657856: (2.492122549s)
--- PASS: TestKubernetesUpgrade (188.72s)
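The version-upgrade sequence above reduces to: start on v1.16.0, stop, restart on v1.28.2, then confirm that asking for the old version again is rejected. A sketch with an illustrative profile name:
    minikube start -p k8s-upgrade-demo --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=docker
    minikube stop -p k8s-upgrade-demo
    minikube start -p k8s-upgrade-demo --memory=2200 --kubernetes-version=v1.28.2 --driver=docker --container-runtime=docker
    kubectl --context k8s-upgrade-demo version --output=json
    # downgrading an existing cluster is refused with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED)
    minikube start -p k8s-upgrade-demo --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=docker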

                                                
                                    
x
+
TestMissingContainerUpgrade (120.73s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.9.0.3795199847.exe start -p missing-upgrade-929622 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.9.0.3795199847.exe start -p missing-upgrade-929622 --memory=2200 --driver=docker  --container-runtime=docker: (54.943425774s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-929622
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-929622: (10.329077616s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-929622
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-929622 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:342: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-929622 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (51.983007637s)
helpers_test.go:175: Cleaning up "missing-upgrade-929622" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-929622
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-929622: (3.015422597s)
--- PASS: TestMissingContainerUpgrade (120.73s)
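This test removes the node container out from under an old-binary profile and checks that the new binary recreates it. A sketch; the container carries the profile name, which here is illustrative:
    /path/to/minikube-v1.9.0 start -p missing-demo --memory=2200 --driver=docker --container-runtime=docker
    docker stop missing-demo && docker rm missing-demo            # simulate the missing node container
    out/minikube-linux-amd64 start -p missing-demo --memory=2200 --driver=docker --container-runtime=docker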

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-920360 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-920360 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (67.244144ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-920360] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17363-491115/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-491115/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
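The expected failure above documents that --no-kubernetes and --kubernetes-version are mutually exclusive; the stderr suggestion is to drop any globally configured version instead. With an illustrative profile name:
    minikube start -p nok8s-demo --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=docker
    # exits 14 (MK_USAGE); if a version is set globally, unset it:
    minikube config unset kubernetes-version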

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.5s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.50s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (42.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-920360 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-920360 --driver=docker  --container-runtime=docker: (41.841476607s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-920360 status -o json
E1005 20:29:52.191421  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/functional-168323/client.crt: no such file or directory
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.15s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (68.32s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.9.0.2910803101.exe start -p stopped-upgrade-986954 --memory=2200 --vm-driver=docker  --container-runtime=docker
E1005 20:29:51.412945  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/addons-432328/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.9.0.2910803101.exe start -p stopped-upgrade-986954 --memory=2200 --vm-driver=docker  --container-runtime=docker: (43.863655197s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.9.0.2910803101.exe -p stopped-upgrade-986954 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.9.0.2910803101.exe -p stopped-upgrade-986954 stop: (2.334383289s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-986954 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-986954 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (22.119799017s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (68.32s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (16.89s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-920360 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-920360 --no-kubernetes --driver=docker  --container-runtime=docker: (14.860312588s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-920360 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-920360 status -o json: exit status 2 (286.883844ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-920360","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-920360
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-920360: (1.743615427s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.89s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (8.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-920360 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-920360 --no-kubernetes --driver=docker  --container-runtime=docker: (8.360881085s)
--- PASS: TestNoKubernetes/serial/Start (8.36s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-920360 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-920360 "sudo systemctl is-active --quiet service kubelet": exit status 1 (255.136762ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
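Taken together, the NoKubernetes steps above amount to: start a profile without Kubernetes, then confirm via status and via systemctl over ssh that kubelet is not running. A sketch with an illustrative profile name:
    minikube start -p nok8s-demo --no-kubernetes --driver=docker --container-runtime=docker
    minikube -p nok8s-demo status -o json                         # Kubelet and APIServer report "Stopped"
    minikube ssh -p nok8s-demo "sudo systemctl is-active --quiet service kubelet"    # non-zero exit: kubelet inactive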

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.44s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.44s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-986954
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-986954: (1.43790366s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.44s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-920360
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-920360: (1.215092332s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (8.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-920360 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-920360 --driver=docker  --container-runtime=docker: (8.295716203s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.30s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-920360 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-920360 "sudo systemctl is-active --quiet service kubelet": exit status 1 (340.398007ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

                                                
                                    
x
+
TestPause/serial/Start (50.27s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-898249 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-898249 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (50.266422017s)
--- PASS: TestPause/serial/Start (50.27s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (37.15s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-898249 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-898249 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (37.128649665s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (37.15s)

                                                
                                    
x
+
TestPause/serial/Pause (0.54s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-898249 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.54s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.29s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-898249 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-898249 --output=json --layout=cluster: exit status 2 (290.336953ms)

                                                
                                                
-- stdout --
	{"Name":"pause-898249","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-898249","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.29s)

                                                
                                    
x
+
TestPause/serial/Unpause (0.49s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-898249 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.49s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.72s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-898249 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.72s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.2s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-898249 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-898249 --alsologtostderr -v=5: (2.196832908s)
--- PASS: TestPause/serial/DeletePaused (2.20s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (15.58s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (15.518875103s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-898249
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-898249: exit status 1 (17.68075ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-898249: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (15.58s)
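The pause group above walks one profile through its whole lifecycle; a condensed sketch with an illustrative profile name, using the same commands as the run lines:
    minikube start -p pause-demo --memory=2048 --install-addons=false --wait=all --driver=docker --container-runtime=docker
    minikube pause -p pause-demo
    minikube status -p pause-demo --output=json --layout=cluster   # StatusCode 418 "Paused" (exit status 2)
    minikube unpause -p pause-demo
    minikube pause -p pause-demo
    minikube delete -p pause-demo
    docker volume inspect pause-demo                               # fails once the profile's volume is gone
    docker network ls                                              # the profile network should be gone as well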

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (52.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-264029 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-264029 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (52.643109777s)
--- PASS: TestNetworkPlugins/group/auto/Start (52.64s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (42.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-264029 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-264029 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (42.07394271s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (42.07s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-264029 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-264029 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
E1005 20:33:45.252225  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/skaffold-440533/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-lhcpn" [e65df386-a8d3-41a6-b53d-31dd4e79a4ca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1005 20:33:46.532901  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/skaffold-440533/client.crt: no such file or directory
E1005 20:33:47.220084  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/ingress-addon-legacy-111068/client.crt: no such file or directory
E1005 20:33:49.093407  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/skaffold-440533/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-lhcpn" [e65df386-a8d3-41a6-b53d-31dd4e79a4ca] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.013680435s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-8k75l" [0075b9ff-a72f-45b8-bdb2-987b7d28b5a2] Running
E1005 20:33:54.213821  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/skaffold-440533/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.02172868s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-264029 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-264029 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-264029 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-264029 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4pdvb" [f84e6485-ac8a-40c0-a5c6-dd46590205d8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4pdvb" [f84e6485-ac8a-40c0-a5c6-dd46590205d8] Running
E1005 20:34:04.454162  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/skaffold-440533/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.01176985s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-264029 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.23s)
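Each network-plugin group repeats the same connectivity probes against a small netcat deployment; shown here verbatim for the auto profile (testdata/netcat-deployment.yaml ships with the test suite):
    kubectl --context auto-264029 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-264029 exec deployment/netcat -- nslookup kubernetes.default                    # DNS
    kubectl --context auto-264029 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"    # localhost
    kubectl --context auto-264029 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"       # hairpin via the service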

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (72.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-264029 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-264029 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m12.654237699s)
--- PASS: TestNetworkPlugins/group/calico/Start (72.65s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-264029 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-264029 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-264029 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (58.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-264029 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E1005 20:34:24.934710  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/skaffold-440533/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-264029 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (58.641586374s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (58.64s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/Start (73.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-264029 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-264029 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m13.552419875s)
--- PASS: TestNetworkPlugins/group/false/Start (73.55s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (39.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-264029 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E1005 20:35:05.895636  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/skaffold-440533/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-264029 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (39.041532908s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (39.04s)
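The plugin under test in each of these groups is chosen purely by the start flags; the variants exercised in this run look like the following (profile names illustrative, --wait flags omitted for brevity):
    minikube start -p cni-auto    --memory=3072 --driver=docker --container-runtime=docker                           # default CNI selection
    minikube start -p cni-kindnet --memory=3072 --cni=kindnet --driver=docker --container-runtime=docker
    minikube start -p cni-calico  --memory=3072 --cni=calico --driver=docker --container-runtime=docker
    minikube start -p cni-flannel --memory=3072 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=docker   # custom CNI manifest
    minikube start -p cni-false   --memory=3072 --cni=false --driver=docker --container-runtime=docker               # no CNI at all
    minikube start -p cni-bridge  --memory=3072 --enable-default-cni=true --driver=docker --container-runtime=docker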

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-ntjgb" [1b248b1c-8829-4b79-be36-aea933b05da2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.020603168s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-264029 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-264029 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-q47j4" [1929d208-88f5-4ca7-bd3f-59eb7bacbfa5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-q47j4" [1929d208-88f5-4ca7-bd3f-59eb7bacbfa5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.010069882s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-264029 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (9.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-264029 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8zn6w" [ed48af69-c50d-4649-8f02-fc7fe87fb50b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8zn6w" [ed48af69-c50d-4649-8f02-fc7fe87fb50b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.010422616s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-264029 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-264029 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-264029 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-264029 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-264029 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-264029 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-264029 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (11.38s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-264029 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dvkvf" [6cb4e88a-0b71-4681-9fba-17d2dae01bbd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-dvkvf" [6cb4e88a-0b71-4681-9fba-17d2dae01bbd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.013885094s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.38s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-264029 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-264029 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-gdfs4" [c8a326c0-57c3-44f4-946f-1f0077788005] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-gdfs4" [c8a326c0-57c3-44f4-946f-1f0077788005] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.010772063s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.35s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (41.63s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-264029 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-264029 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (41.629566894s)
--- PASS: TestNetworkPlugins/group/flannel/Start (41.63s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (81.34s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-264029 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-264029 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m21.342958399s)
--- PASS: TestNetworkPlugins/group/bridge/Start (81.34s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-264029 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (33.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-264029 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-264029 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.195280491s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-264029 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-264029 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.160323026s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-264029 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (33.21s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-264029 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-264029 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (80.46s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-264029 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-264029 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m20.457419362s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (80.46s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (8.03s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-kkg9q" [a693e1c6-2484-464d-b4fe-b9b907f49eb3] Pending: Initialized:ContainersNotInitialized (containers with incomplete status: [install-cni]) / Ready:ContainersNotReady (containers with unready status: [kube-flannel]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-flannel])
helpers_test.go:344: "kube-flannel-ds-kkg9q" [a693e1c6-2484-464d-b4fe-b9b907f49eb3] Pending / Ready:ContainersNotReady (containers with unready status: [kube-flannel]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-flannel])
helpers_test.go:344: "kube-flannel-ds-kkg9q" [a693e1c6-2484-464d-b4fe-b9b907f49eb3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 8.03014334s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (8.03s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-264029 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-264029 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-264029 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.34s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-264029 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-gcbm6" [2ae5dfc1-961f-4301-925c-de0e950906c6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-gcbm6" [2ae5dfc1-961f-4301-925c-de0e950906c6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.0126421s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.34s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-264029 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-264029 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-264029 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (87.82s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-477708 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-477708 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.2: (1m27.819687089s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (87.82s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-264029 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (12.31s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-264029 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-d7ptt" [20580263-758b-45c5-ab74-efedac838e07] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-d7ptt" [20580263-758b-45c5-ab74-efedac838e07] Running
E1005 20:37:24.174016  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/ingress-addon-legacy-111068/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.011585971s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-264029 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-264029 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-264029 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-264029 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (11.41s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-264029 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-g7lvl" [6585f708-1a41-4a44-b40e-b4828544baa5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-g7lvl" [6585f708-1a41-4a44-b40e-b4828544baa5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.021751909s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.41s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (41.21s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-411409 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-411409 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.2: (41.208364753s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (41.21s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-264029 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-264029 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-264029 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.16s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (43.65s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-973002 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-973002 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.2: (43.649245999s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (43.65s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (7.38s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-411409 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8c99ae69-3713-4c9b-b76b-c784f15b924a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8c99ae69-3713-4c9b-b76b-c784f15b924a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.017696155s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-411409 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.38s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.93s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-411409 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-411409 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (10.78s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-411409 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-411409 --alsologtostderr -v=3: (10.783288498s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.78s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.37s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-477708 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c67685d5-8727-4590-91d6-e734ca334c8f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c67685d5-8727-4590-91d6-e734ca334c8f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.01413923s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-477708 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.37s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.89s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-477708 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1005 20:38:43.973627  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/skaffold-440533/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-477708 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.89s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (10.71s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-477708 --alsologtostderr -v=3
E1005 20:38:45.238161  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/auto-264029/client.crt: no such file or directory
E1005 20:38:45.243522  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/auto-264029/client.crt: no such file or directory
E1005 20:38:45.253831  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/auto-264029/client.crt: no such file or directory
E1005 20:38:45.274164  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/auto-264029/client.crt: no such file or directory
E1005 20:38:45.314546  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/auto-264029/client.crt: no such file or directory
E1005 20:38:45.394885  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/auto-264029/client.crt: no such file or directory
E1005 20:38:45.555352  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/auto-264029/client.crt: no such file or directory
E1005 20:38:45.875554  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/auto-264029/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-477708 --alsologtostderr -v=3: (10.710204246s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.71s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.3s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-411409 -n embed-certs-411409
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-411409 -n embed-certs-411409: exit status 7 (139.143622ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-411409 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (312.54s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-411409 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.2
E1005 20:38:46.516357  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/auto-264029/client.crt: no such file or directory
E1005 20:38:47.796619  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/auto-264029/client.crt: no such file or directory
E1005 20:38:50.219501  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kindnet-264029/client.crt: no such file or directory
E1005 20:38:50.224815  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kindnet-264029/client.crt: no such file or directory
E1005 20:38:50.235153  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kindnet-264029/client.crt: no such file or directory
E1005 20:38:50.255451  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kindnet-264029/client.crt: no such file or directory
E1005 20:38:50.295771  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kindnet-264029/client.crt: no such file or directory
E1005 20:38:50.357327  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/auto-264029/client.crt: no such file or directory
E1005 20:38:50.376060  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kindnet-264029/client.crt: no such file or directory
E1005 20:38:50.536959  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kindnet-264029/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-411409 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.2: (5m12.107221431s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-411409 -n embed-certs-411409
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (312.54s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-973002 create -f testdata/busybox.yaml
E1005 20:38:50.858069  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kindnet-264029/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7651d1dd-704f-414e-bb04-2cb92a8d64cf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1005 20:38:51.498353  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kindnet-264029/client.crt: no such file or directory
E1005 20:38:52.779182  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kindnet-264029/client.crt: no such file or directory
helpers_test.go:344: "busybox" [7651d1dd-704f-414e-bb04-2cb92a8d64cf] Running
E1005 20:38:55.339735  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kindnet-264029/client.crt: no such file or directory
E1005 20:38:55.478046  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/auto-264029/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.014797323s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-973002 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.32s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-477708 -n no-preload-477708
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-477708 -n no-preload-477708: exit status 7 (62.866171ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-477708 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (338.28s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-477708 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-477708 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.2: (5m37.824213665s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-477708 -n no-preload-477708
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (338.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-973002 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-973002 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.97s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (10.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-973002 --alsologtostderr -v=3
E1005 20:39:00.460840  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kindnet-264029/client.crt: no such file or directory
E1005 20:39:05.719019  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/auto-264029/client.crt: no such file or directory
E1005 20:39:10.701140  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kindnet-264029/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-973002 --alsologtostderr -v=3: (10.946070451s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.95s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-973002 -n default-k8s-diff-port-973002
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-973002 -n default-k8s-diff-port-973002: exit status 7 (181.202907ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-973002 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.31s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (314.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-973002 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.2
E1005 20:39:11.656585  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/skaffold-440533/client.crt: no such file or directory
E1005 20:39:26.199627  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/auto-264029/client.crt: no such file or directory
E1005 20:39:31.181851  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kindnet-264029/client.crt: no such file or directory
E1005 20:39:51.411963  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/addons-432328/client.crt: no such file or directory
E1005 20:39:52.190853  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/functional-168323/client.crt: no such file or directory
E1005 20:40:07.160827  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/auto-264029/client.crt: no such file or directory
E1005 20:40:12.142669  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kindnet-264029/client.crt: no such file or directory
E1005 20:40:13.021041  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/calico-264029/client.crt: no such file or directory
E1005 20:40:13.026305  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/calico-264029/client.crt: no such file or directory
E1005 20:40:13.036645  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/calico-264029/client.crt: no such file or directory
E1005 20:40:13.057118  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/calico-264029/client.crt: no such file or directory
E1005 20:40:13.097450  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/calico-264029/client.crt: no such file or directory
E1005 20:40:13.177829  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/calico-264029/client.crt: no such file or directory
E1005 20:40:13.338260  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/calico-264029/client.crt: no such file or directory
E1005 20:40:13.658697  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/calico-264029/client.crt: no such file or directory
E1005 20:40:14.298902  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/calico-264029/client.crt: no such file or directory
E1005 20:40:14.451273  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/custom-flannel-264029/client.crt: no such file or directory
E1005 20:40:14.456593  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/custom-flannel-264029/client.crt: no such file or directory
E1005 20:40:14.466924  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/custom-flannel-264029/client.crt: no such file or directory
E1005 20:40:14.487245  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/custom-flannel-264029/client.crt: no such file or directory
E1005 20:40:14.527556  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/custom-flannel-264029/client.crt: no such file or directory
E1005 20:40:14.607905  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/custom-flannel-264029/client.crt: no such file or directory
E1005 20:40:14.768051  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/custom-flannel-264029/client.crt: no such file or directory
E1005 20:40:15.088652  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/custom-flannel-264029/client.crt: no such file or directory
E1005 20:40:15.579616  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/calico-264029/client.crt: no such file or directory
E1005 20:40:15.729348  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/custom-flannel-264029/client.crt: no such file or directory
E1005 20:40:17.010224  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/custom-flannel-264029/client.crt: no such file or directory
E1005 20:40:18.140649  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/calico-264029/client.crt: no such file or directory
E1005 20:40:19.570771  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/custom-flannel-264029/client.crt: no such file or directory
E1005 20:40:23.260874  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/calico-264029/client.crt: no such file or directory
E1005 20:40:24.691043  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/custom-flannel-264029/client.crt: no such file or directory
E1005 20:40:33.501727  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/calico-264029/client.crt: no such file or directory
E1005 20:40:34.931499  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/custom-flannel-264029/client.crt: no such file or directory
E1005 20:40:42.815970  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/false-264029/client.crt: no such file or directory
E1005 20:40:42.821314  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/false-264029/client.crt: no such file or directory
E1005 20:40:42.831614  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/false-264029/client.crt: no such file or directory
E1005 20:40:42.851942  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/false-264029/client.crt: no such file or directory
E1005 20:40:42.892290  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/false-264029/client.crt: no such file or directory
E1005 20:40:42.972668  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/false-264029/client.crt: no such file or directory
E1005 20:40:43.133689  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/false-264029/client.crt: no such file or directory
E1005 20:40:43.454124  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/false-264029/client.crt: no such file or directory
E1005 20:40:43.872830  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/enable-default-cni-264029/client.crt: no such file or directory
E1005 20:40:43.878103  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/enable-default-cni-264029/client.crt: no such file or directory
E1005 20:40:43.888431  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/enable-default-cni-264029/client.crt: no such file or directory
E1005 20:40:43.908739  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/enable-default-cni-264029/client.crt: no such file or directory
E1005 20:40:43.949047  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/enable-default-cni-264029/client.crt: no such file or directory
E1005 20:40:44.029537  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/enable-default-cni-264029/client.crt: no such file or directory
E1005 20:40:44.095151  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/false-264029/client.crt: no such file or directory
E1005 20:40:44.190657  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/enable-default-cni-264029/client.crt: no such file or directory
E1005 20:40:44.511271  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/enable-default-cni-264029/client.crt: no such file or directory
E1005 20:40:45.152101  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/enable-default-cni-264029/client.crt: no such file or directory
E1005 20:40:45.375668  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/false-264029/client.crt: no such file or directory
E1005 20:40:46.432877  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/enable-default-cni-264029/client.crt: no such file or directory
E1005 20:40:47.936347  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/false-264029/client.crt: no such file or directory
E1005 20:40:48.993683  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/enable-default-cni-264029/client.crt: no such file or directory
E1005 20:40:53.056993  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/false-264029/client.crt: no such file or directory
E1005 20:40:53.982045  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/calico-264029/client.crt: no such file or directory
E1005 20:40:54.114337  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/enable-default-cni-264029/client.crt: no such file or directory
E1005 20:40:55.412425  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/custom-flannel-264029/client.crt: no such file or directory
E1005 20:41:03.297168  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/false-264029/client.crt: no such file or directory
E1005 20:41:04.355342  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/enable-default-cni-264029/client.crt: no such file or directory
E1005 20:41:23.778309  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/false-264029/client.crt: no such file or directory
E1005 20:41:24.836132  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/enable-default-cni-264029/client.crt: no such file or directory
E1005 20:41:26.598730  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/flannel-264029/client.crt: no such file or directory
E1005 20:41:26.604008  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/flannel-264029/client.crt: no such file or directory
E1005 20:41:26.614285  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/flannel-264029/client.crt: no such file or directory
E1005 20:41:26.634601  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/flannel-264029/client.crt: no such file or directory
E1005 20:41:26.674931  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/flannel-264029/client.crt: no such file or directory
E1005 20:41:26.755261  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/flannel-264029/client.crt: no such file or directory
E1005 20:41:26.915687  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/flannel-264029/client.crt: no such file or directory
E1005 20:41:27.236248  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/flannel-264029/client.crt: no such file or directory
E1005 20:41:27.876546  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/flannel-264029/client.crt: no such file or directory
E1005 20:41:29.081409  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/auto-264029/client.crt: no such file or directory
E1005 20:41:29.157637  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/flannel-264029/client.crt: no such file or directory
E1005 20:41:31.717863  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/flannel-264029/client.crt: no such file or directory
E1005 20:41:34.063896  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kindnet-264029/client.crt: no such file or directory
E1005 20:41:34.942414  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/calico-264029/client.crt: no such file or directory
E1005 20:41:36.373294  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/custom-flannel-264029/client.crt: no such file or directory
E1005 20:41:36.838851  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/flannel-264029/client.crt: no such file or directory
E1005 20:41:47.080018  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/flannel-264029/client.crt: no such file or directory
E1005 20:42:04.739316  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/false-264029/client.crt: no such file or directory
E1005 20:42:05.797064  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/enable-default-cni-264029/client.crt: no such file or directory
E1005 20:42:07.561016  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/flannel-264029/client.crt: no such file or directory
E1005 20:42:12.585618  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/bridge-264029/client.crt: no such file or directory
E1005 20:42:12.590940  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/bridge-264029/client.crt: no such file or directory
E1005 20:42:12.601263  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/bridge-264029/client.crt: no such file or directory
E1005 20:42:12.621602  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/bridge-264029/client.crt: no such file or directory
E1005 20:42:12.661898  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/bridge-264029/client.crt: no such file or directory
E1005 20:42:12.742244  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/bridge-264029/client.crt: no such file or directory
E1005 20:42:12.902691  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/bridge-264029/client.crt: no such file or directory
E1005 20:42:13.223091  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/bridge-264029/client.crt: no such file or directory
E1005 20:42:13.864126  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/bridge-264029/client.crt: no such file or directory
E1005 20:42:15.144934  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/bridge-264029/client.crt: no such file or directory
E1005 20:42:17.705670  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/bridge-264029/client.crt: no such file or directory
E1005 20:42:22.826788  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/bridge-264029/client.crt: no such file or directory
E1005 20:42:24.173514  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/ingress-addon-legacy-111068/client.crt: no such file or directory
E1005 20:42:33.067202  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/bridge-264029/client.crt: no such file or directory
E1005 20:42:35.778533  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kubenet-264029/client.crt: no such file or directory
E1005 20:42:35.783832  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kubenet-264029/client.crt: no such file or directory
E1005 20:42:35.794176  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kubenet-264029/client.crt: no such file or directory
E1005 20:42:35.814535  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kubenet-264029/client.crt: no such file or directory
E1005 20:42:35.854832  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kubenet-264029/client.crt: no such file or directory
E1005 20:42:35.935165  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kubenet-264029/client.crt: no such file or directory
E1005 20:42:36.095597  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kubenet-264029/client.crt: no such file or directory
E1005 20:42:36.415956  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kubenet-264029/client.crt: no such file or directory
E1005 20:42:37.056960  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kubenet-264029/client.crt: no such file or directory
E1005 20:42:38.337383  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kubenet-264029/client.crt: no such file or directory
E1005 20:42:40.897710  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kubenet-264029/client.crt: no such file or directory
E1005 20:42:46.017899  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kubenet-264029/client.crt: no such file or directory
E1005 20:42:48.521353  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/flannel-264029/client.crt: no such file or directory
E1005 20:42:53.547462  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/bridge-264029/client.crt: no such file or directory
E1005 20:42:54.459387  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/addons-432328/client.crt: no such file or directory
E1005 20:42:56.259053  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kubenet-264029/client.crt: no such file or directory
E1005 20:42:56.863085  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/calico-264029/client.crt: no such file or directory
E1005 20:42:58.293991  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/custom-flannel-264029/client.crt: no such file or directory
E1005 20:43:16.739584  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kubenet-264029/client.crt: no such file or directory
E1005 20:43:26.660211  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/false-264029/client.crt: no such file or directory
E1005 20:43:27.717991  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/enable-default-cni-264029/client.crt: no such file or directory
E1005 20:43:34.508187  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/bridge-264029/client.crt: no such file or directory
E1005 20:43:43.973684  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/skaffold-440533/client.crt: no such file or directory
E1005 20:43:45.237561  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/auto-264029/client.crt: no such file or directory
E1005 20:43:50.219364  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kindnet-264029/client.crt: no such file or directory
E1005 20:43:57.700615  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kubenet-264029/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-973002 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.2: (5m13.67043578s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-973002 -n default-k8s-diff-port-973002
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (314.01s)
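Note on the recurring cert_rotation.go:168 lines above: they are emitted by a long-lived process in this run (pid 497926), most likely because it is still watching client certificates for profiles (false-264029, flannel-264029, bridge-264029, kubenet-264029, ...) that were deleted earlier in the run; they are log noise rather than failures of this test. A minimal sketch of how to confirm the certificates are simply gone, assuming the same MINIKUBE_HOME layout shown in the paths above:

	# Sketch: list whichever profile client certs still exist under this MINIKUBE_HOME
	ls /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/*/client.crt 2>/dev/null
	# Profiles removed earlier in the run (e.g. flannel-264029) will be absent, matching the errors above.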

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-79fwf" [eba7c0ea-a8a9-430e-abe7-a9dbb35d1b1d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-79fwf" [eba7c0ea-a8a9-430e-abe7-a9dbb35d1b1d] Running
E1005 20:44:10.441713  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/flannel-264029/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.01824995s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.02s)
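For reference, the condition the helper waits on here (pods labelled k8s-app=kubernetes-dashboard reaching Running/Ready) can be inspected by hand against the same context; the commands below are illustrative and not part of the test itself, with the context name taken from the embed-certs tests that follow:

	kubectl --context embed-certs-411409 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context embed-certs-411409 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m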

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-79fwf" [eba7c0ea-a8a9-430e-abe7-a9dbb35d1b1d] Running
E1005 20:44:12.922549  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/auto-264029/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011295262s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-411409 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-411409 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.52s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-411409 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-411409 -n embed-certs-411409
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-411409 -n embed-certs-411409: exit status 2 (330.173183ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-411409 -n embed-certs-411409
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-411409 -n embed-certs-411409: exit status 2 (310.017755ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-411409 --alsologtostderr -v=1
E1005 20:44:17.905097  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kindnet-264029/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-411409 -n embed-certs-411409
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-411409 -n embed-certs-411409
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.52s)
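The Pause subtest above exercises a fixed sequence: pause, confirm the API server reports Paused and the kubelet reports Stopped (exit status 2 is expected for both status calls), then unpause and re-check. Run outside the harness, the same sequence is roughly:

	out/minikube-linux-amd64 pause -p embed-certs-411409 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-411409 -n embed-certs-411409   # prints "Paused", exits 2
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-411409 -n embed-certs-411409     # prints "Stopped", exits 2
	out/minikube-linux-amd64 unpause -p embed-certs-411409 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-411409 -n embed-certs-411409
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-411409 -n embed-certs-411409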

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (40.9s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-251602 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-251602 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.2: (40.904798697s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (40.90s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8wz8z" [334911ef-a8f2-4af7-9b2d-591982c5314b] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8wz8z" [334911ef-a8f2-4af7-9b2d-591982c5314b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.017238138s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.02s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7mrm2" [7f6633f3-703d-43d2-bb4a-9dc88623dfae] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7mrm2" [7f6633f3-703d-43d2-bb4a-9dc88623dfae] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.024080108s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.03s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8wz8z" [334911ef-a8f2-4af7-9b2d-591982c5314b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011434774s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-973002 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-973002 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.34s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-973002 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-973002 -n default-k8s-diff-port-973002
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-973002 -n default-k8s-diff-port-973002: exit status 2 (335.459845ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-973002 -n default-k8s-diff-port-973002
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-973002 -n default-k8s-diff-port-973002: exit status 2 (320.975827ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-973002 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-973002 -n default-k8s-diff-port-973002
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-973002 -n default-k8s-diff-port-973002
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.60s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7mrm2" [7f6633f3-703d-43d2-bb4a-9dc88623dfae] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.017791595s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-477708 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-477708 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.33s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.58s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-477708 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-477708 -n no-preload-477708
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-477708 -n no-preload-477708: exit status 2 (291.364385ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-477708 -n no-preload-477708
E1005 20:44:51.412850  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/addons-432328/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-477708 -n no-preload-477708: exit status 2 (312.356739ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-477708 --alsologtostderr -v=1
E1005 20:44:52.191767  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/functional-168323/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-477708 -n no-preload-477708
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-477708 -n no-preload-477708
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.58s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.87s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-251602 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.87s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.82s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-251602 --alsologtostderr -v=3
E1005 20:45:13.020657  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/calico-264029/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-251602 --alsologtostderr -v=3: (10.816068247s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.82s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-251602 -n newest-cni-251602
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-251602 -n newest-cni-251602: exit status 7 (64.039769ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-251602 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (26.92s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-251602 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.2
E1005 20:45:14.451486  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/custom-flannel-264029/client.crt: no such file or directory
E1005 20:45:19.621470  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kubenet-264029/client.crt: no such file or directory
E1005 20:45:40.703799  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/calico-264029/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-251602 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.2: (26.62479385s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-251602 -n newest-cni-251602
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (26.92s)
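To confirm that --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 survived the restart, one simple check (illustrative only, not performed by the test, and assuming the kubeconfig context created by the start above) is to read the node's podCIDR:

	kubectl --context newest-cni-251602 get nodes -o jsonpath='{.items[0].spec.podCIDR}{"\n"}'
	# Expected to fall within 10.42.0.0/16 if the kubeadm extra-config was applied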

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-251602 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.53s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-251602 --alsologtostderr -v=1
E1005 20:45:42.134858  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/custom-flannel-264029/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-251602 -n newest-cni-251602
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-251602 -n newest-cni-251602: exit status 2 (300.129249ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-251602 -n newest-cni-251602
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-251602 -n newest-cni-251602: exit status 2 (282.023506ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-251602 --alsologtostderr -v=1
E1005 20:45:42.816352  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/false-264029/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-251602 -n newest-cni-251602
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-251602 -n newest-cni-251602
E1005 20:45:43.873275  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/enable-default-cni-264029/client.crt: no such file or directory
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.53s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.72s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-330869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-330869 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.72s)
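The test only runs kubectl describe on the metrics-server deployment; to see specifically that the --images/--registries overrides landed, the container image can be read directly. This is an illustrative follow-up, not part of the test, and the expected value is an inference from the overrides above:

	kubectl --context old-k8s-version-330869 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
	# Likely to reference fake.domain/registry.k8s.io/echoserver:1.4 given the overrides above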

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (11.87s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-330869 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-330869 --alsologtostderr -v=3: (11.872508374s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.87s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-330869 -n old-k8s-version-330869
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-330869 -n old-k8s-version-330869: exit status 7 (81.484354ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-330869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (419.62s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-330869 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
E1005 20:54:51.412825  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/addons-432328/client.crt: no such file or directory
E1005 20:54:52.190813  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/functional-168323/client.crt: no such file or directory
E1005 20:55:08.283286  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/auto-264029/client.crt: no such file or directory
E1005 20:55:13.021119  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/calico-264029/client.crt: no such file or directory
E1005 20:55:13.265639  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kindnet-264029/client.crt: no such file or directory
E1005 20:55:14.451576  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/custom-flannel-264029/client.crt: no such file or directory
E1005 20:55:42.816446  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/false-264029/client.crt: no such file or directory
E1005 20:55:43.873264  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/enable-default-cni-264029/client.crt: no such file or directory
E1005 20:56:26.598477  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/flannel-264029/client.crt: no such file or directory
E1005 20:56:36.064856  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/calico-264029/client.crt: no such file or directory
E1005 20:56:37.495333  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/custom-flannel-264029/client.crt: no such file or directory
E1005 20:57:05.861806  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/false-264029/client.crt: no such file or directory
E1005 20:57:06.918828  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/enable-default-cni-264029/client.crt: no such file or directory
E1005 20:57:12.585420  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/bridge-264029/client.crt: no such file or directory
E1005 20:57:24.173758  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/ingress-addon-legacy-111068/client.crt: no such file or directory
E1005 20:57:35.778503  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kubenet-264029/client.crt: no such file or directory
E1005 20:57:49.643711  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/flannel-264029/client.crt: no such file or directory
E1005 20:58:35.629864  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/bridge-264029/client.crt: no such file or directory
E1005 20:58:35.802523  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/no-preload-477708/client.crt: no such file or directory
E1005 20:58:43.973694  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/skaffold-440533/client.crt: no such file or directory
E1005 20:58:45.237430  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/auto-264029/client.crt: no such file or directory
E1005 20:58:50.220179  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kindnet-264029/client.crt: no such file or directory
E1005 20:58:51.014086  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/default-k8s-diff-port-973002/client.crt: no such file or directory
E1005 20:58:58.822846  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/kubenet-264029/client.crt: no such file or directory
E1005 20:59:34.460325  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/addons-432328/client.crt: no such file or directory
E1005 20:59:51.411948  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/addons-432328/client.crt: no such file or directory
E1005 20:59:52.191877  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/functional-168323/client.crt: no such file or directory
E1005 21:00:13.021038  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/calico-264029/client.crt: no such file or directory
E1005 21:00:14.451435  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/custom-flannel-264029/client.crt: no such file or directory
E1005 21:00:42.815632  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/false-264029/client.crt: no such file or directory
E1005 21:00:43.872603  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/enable-default-cni-264029/client.crt: no such file or directory
E1005 21:01:26.598219  497926 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/flannel-264029/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-330869 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (6m59.330994265s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-330869 -n old-k8s-version-330869
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (419.62s)
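After the roughly seven-minute second start, the cluster should still be running the requested legacy version. A quick confirmation (illustrative, not part of the test):

	out/minikube-linux-amd64 status -p old-k8s-version-330869
	kubectl --context old-k8s-version-330869 get nodes -o wide   # VERSION column should show v1.16.0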

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-9fr88" [caaaf558-0512-437c-9d21-459cc069a629] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014633055s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-9fr88" [caaaf558-0512-437c-9d21-459cc069a629] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00857763s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-330869 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-330869 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)
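The raw "sudo crictl images -o json" output the test inspects can be made human-readable with a small filter; the jq binary here is an assumption, not something the harness uses:

	out/minikube-linux-amd64 ssh -p old-k8s-version-330869 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'
	# Lists repo tags such as the gcr.io/k8s-minikube/busybox:1.28.4-glibc image noted above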

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-330869 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-330869 -n old-k8s-version-330869
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-330869 -n old-k8s-version-330869: exit status 2 (279.590034ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-330869 -n old-k8s-version-330869
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-330869 -n old-k8s-version-330869: exit status 2 (279.062465ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-330869 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-330869 -n old-k8s-version-330869
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-330869 -n old-k8s-version-330869
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.32s)

                                                
                                    

Test skip (20/322)

x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.2/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:496: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (3.3s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-264029 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-264029

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-264029

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-264029

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-264029

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-264029

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-264029

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-264029

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-264029

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-264029

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-264029

>>> host: /etc/nsswitch.conf:
* Profile "cilium-264029" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264029"

>>> host: /etc/hosts:
* Profile "cilium-264029" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264029"

>>> host: /etc/resolv.conf:
* Profile "cilium-264029" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264029"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-264029

>>> host: crictl pods:
* Profile "cilium-264029" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264029"

>>> host: crictl containers:
* Profile "cilium-264029" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264029"

>>> k8s: describe netcat deployment:
error: context "cilium-264029" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-264029" does not exist

>>> k8s: netcat logs:
error: context "cilium-264029" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-264029" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-264029" does not exist

>>> k8s: coredns logs:
error: context "cilium-264029" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-264029" does not exist

>>> k8s: api server logs:
error: context "cilium-264029" does not exist

>>> host: /etc/cni:
* Profile "cilium-264029" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264029"

>>> host: ip a s:
* Profile "cilium-264029" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264029"

>>> host: ip r s:
* Profile "cilium-264029" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264029"

>>> host: iptables-save:
* Profile "cilium-264029" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264029"

>>> host: iptables table nat:
* Profile "cilium-264029" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264029"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-264029

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-264029

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-264029" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-264029" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-264029

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-264029

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-264029" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-264029" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-264029" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-264029" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-264029" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-264029" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264029"

>>> host: kubelet daemon config:
* Profile "cilium-264029" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264029"

>>> k8s: kubelet logs:
* Profile "cilium-264029" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264029"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-264029" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264029"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-264029" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264029"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17363-491115/.minikube/ca.crt
    server: https://127.0.0.1:33270
  name: missing-upgrade-929622
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17363-491115/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 05 Oct 2023 20:30:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: offline-docker-896990
contexts:
- context:
    cluster: missing-upgrade-929622
    user: missing-upgrade-929622
  name: missing-upgrade-929622
- context:
    cluster: offline-docker-896990
    extensions:
    - extension:
        last-update: Thu, 05 Oct 2023 20:30:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: offline-docker-896990
  name: offline-docker-896990
current-context: ""
kind: Config
preferences: {}
users:
- name: missing-upgrade-929622
  user:
    client-certificate: /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/missing-upgrade-929622/client.crt
    client-key: /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/missing-upgrade-929622/client.key
- name: offline-docker-896990
  user:
    client-certificate: /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/offline-docker-896990/client.crt
    client-key: /home/jenkins/minikube-integration/17363-491115/.minikube/profiles/offline-docker-896990/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-264029

>>> host: docker daemon status:
* Profile "cilium-264029" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264029"

>>> host: docker daemon config:
* Profile "cilium-264029" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264029"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-264029" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264029"

>>> host: docker system info:
* Profile "cilium-264029" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264029"

>>> host: cri-docker daemon status:
* Profile "cilium-264029" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264029"

>>> host: cri-docker daemon config:
* Profile "cilium-264029" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264029"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-264029" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264029"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-264029" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264029"

>>> host: cri-dockerd version:
* Profile "cilium-264029" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264029"

>>> host: containerd daemon status:
* Profile "cilium-264029" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264029"

>>> host: containerd daemon config:
* Profile "cilium-264029" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264029"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-264029" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264029"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-264029" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264029"

>>> host: containerd config dump:
* Profile "cilium-264029" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264029"

>>> host: crio daemon status:
* Profile "cilium-264029" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264029"

>>> host: crio daemon config:
* Profile "cilium-264029" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264029"

>>> host: /etc/crio:
* Profile "cilium-264029" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264029"

>>> host: crio config:
* Profile "cilium-264029" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264029"

----------------------- debugLogs end: cilium-264029 [took: 3.163198736s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-264029" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-264029
--- SKIP: TestNetworkPlugins/group/cilium (3.30s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-307240" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-307240
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)
