Test Report: Docker_Linux_containerd 21924

af8f7912417d9ebc8a76a18bcb87417cd1a63b57:2025-11-19:42387

Tests failed (5/333)

Order  Failed test                                                  Duration (s)
256    TestKubernetesUpgrade                                        595.8
350    TestStartStop/group/old-k8s-version/serial/DeployApp         14.23
351    TestStartStop/group/no-preload/serial/DeployApp              12.1
354    TestStartStop/group/embed-certs/serial/DeployApp             14.58
386    TestStartStop/group/default-k8s-diff-port/serial/DeployApp   13.86
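The four DeployApp failures are short subtest failures under TestStartStop; most of the lost time is TestKubernetesUpgrade, whose log follows. The upgrade sequence it drives can be replayed outside the harness with the same commands that appear in the log. A minimal sketch in Go, assuming a built binary at out/minikube-linux-amd64; the profile name kubernetes-upgrade-repro is hypothetical:

    // repro.go: replay the start -> stop -> upgrade sequence from the log.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    // minikube runs the locally built binary with the given arguments and
    // echoes its combined output, mirroring the (dbg) Run lines in the log.
    func minikube(args ...string) {
        out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
        fmt.Printf("$ minikube %v\n%s\n", args, out)
        if err != nil {
            log.Printf("non-zero exit: %v", err)
        }
    }

    func main() {
        p := "kubernetes-upgrade-repro" // hypothetical profile name
        minikube("start", "-p", p, "--memory=3072", "--kubernetes-version=v1.28.0",
            "--driver=docker", "--container-runtime=containerd")
        minikube("stop", "-p", p)
        minikube("start", "-p", p, "--memory=3072", "--kubernetes-version=v1.34.1",
            "--driver=docker", "--container-runtime=containerd")
    }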
TestKubernetesUpgrade (595.8s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-896338 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-896338 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (28.594543996s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-896338
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-896338: (4.45198081s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-896338 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-896338 status --format={{.Host}}: exit status 7 (86.432106ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
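The non-zero status above is tolerated: the cluster was stopped two steps earlier, so exit status 7 from "status --format={{.Host}}" only reports the host state. A minimal sketch of that tolerance, assuming (as the log note suggests) that status 7 merely means a stopped host:

    // status_check.go: run `minikube status` and treat exit status 7 as
    // informational (stopped host) rather than as a failure.
    package main

    import (
        "errors"
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64",
            "-p", "kubernetes-upgrade-896338", "status", "--format={{.Host}}")
        out, err := cmd.Output() // stdout is still populated on a non-zero exit
        var ee *exec.ExitError
        switch {
        case err == nil:
            fmt.Printf("host state: %s\n", out)
        case errors.As(err, &ee) && ee.ExitCode() == 7:
            fmt.Printf("exit status 7 (may be ok), host state: %s\n", out)
        default:
            log.Fatal(err)
        }
    }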
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-896338 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-896338 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (29.675351354s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-896338 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-896338 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-896338 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (89.202915ms)

-- stdout --
	* [kubernetes-upgrade-896338] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21924
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21924-11107/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11107/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-896338
	    minikube start -p kubernetes-upgrade-896338 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8963382 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-896338 --kubernetes-version=v1.34.1
	    

** /stderr **
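Exit status 106 is the expected result of this step: minikube refuses to move the existing v1.34.1 cluster back to v1.28.0 and exits with K8S_DOWNGRADE_UNSUPPORTED. A minimal sketch of the assertion the test effectively makes at version_upgrade_test.go:269, assuming 106 is the only acceptable exit code here:

    // downgrade_check.go: assert the downgrade attempt is refused with
    // exit status 106 (K8S_DOWNGRADE_UNSUPPORTED).
    package main

    import (
        "errors"
        "log"
        "os/exec"
    )

    func main() {
        err := exec.Command("out/minikube-linux-amd64", "start",
            "-p", "kubernetes-upgrade-896338", "--memory=3072",
            "--kubernetes-version=v1.28.0", "--driver=docker",
            "--container-runtime=containerd").Run()
        var ee *exec.ExitError
        if !errors.As(err, &ee) || ee.ExitCode() != 106 {
            log.Fatalf("expected exit status 106, got: %v", err)
        }
        log.Print("downgrade correctly refused (exit status 106)")
    }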
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-896338 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-896338 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 80 (7m20.450450929s)

-- stdout --
	* [kubernetes-upgrade-896338] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21924
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21924-11107/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11107/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-896338" primary control-plane node in "kubernetes-upgrade-896338" cluster
	* Pulling base image v0.0.48-1763507788-21924 ...
	* Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1119 02:26:26.095711  208368 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:26:26.095863  208368 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:26:26.095875  208368 out.go:374] Setting ErrFile to fd 2...
	I1119 02:26:26.095882  208368 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:26:26.096125  208368 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11107/.minikube/bin
	I1119 02:26:26.113581  208368 out.go:368] Setting JSON to false
	I1119 02:26:26.115015  208368 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4126,"bootTime":1763515060,"procs":265,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 02:26:26.115148  208368 start.go:143] virtualization: kvm guest
	I1119 02:26:26.116794  208368 out.go:179] * [kubernetes-upgrade-896338] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 02:26:26.118392  208368 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:26:26.118393  208368 notify.go:221] Checking for updates...
	I1119 02:26:26.120772  208368 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:26:26.122416  208368 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-11107/kubeconfig
	I1119 02:26:26.124418  208368 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11107/.minikube
	I1119 02:26:26.128814  208368 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 02:26:26.130090  208368 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:26:26.131935  208368 config.go:182] Loaded profile config "kubernetes-upgrade-896338": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:26:26.132583  208368 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:26:26.168868  208368 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 02:26:26.168950  208368 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:26:26.251452  208368 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-19 02:26:26.240245553 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:26:26.251575  208368 docker.go:319] overlay module found
	I1119 02:26:26.253351  208368 out.go:179] * Using the docker driver based on existing profile
	I1119 02:26:26.254517  208368 start.go:309] selected driver: docker
	I1119 02:26:26.254535  208368 start.go:930] validating driver "docker" against &{Name:kubernetes-upgrade-896338 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-896338 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:26:26.254629  208368 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:26:26.255515  208368 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:26:26.329891  208368 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-19 02:26:26.317174636 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:26:26.330237  208368 cni.go:84] Creating CNI manager for ""
	I1119 02:26:26.330299  208368 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 02:26:26.330349  208368 start.go:353] cluster config:
	{Name:kubernetes-upgrade-896338 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-896338 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:26:26.332985  208368 out.go:179] * Starting "kubernetes-upgrade-896338" primary control-plane node in "kubernetes-upgrade-896338" cluster
	I1119 02:26:26.334248  208368 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1119 02:26:26.335658  208368 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 02:26:26.337047  208368 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 02:26:26.337086  208368 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1119 02:26:26.337095  208368 cache.go:65] Caching tarball of preloaded images
	I1119 02:26:26.337176  208368 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 02:26:26.337205  208368 preload.go:238] Found /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1119 02:26:26.337325  208368 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1119 02:26:26.337488  208368 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kubernetes-upgrade-896338/config.json ...
	I1119 02:26:26.362337  208368 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 02:26:26.362357  208368 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 02:26:26.362393  208368 cache.go:243] Successfully downloaded all kic artifacts
	I1119 02:26:26.362420  208368 start.go:360] acquireMachinesLock for kubernetes-upgrade-896338: {Name:mkcc2d1156d34e99d5c80a4b60172f822d6bf4cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:26:26.362479  208368 start.go:364] duration metric: took 38.96µs to acquireMachinesLock for "kubernetes-upgrade-896338"
	I1119 02:26:26.362502  208368 start.go:96] Skipping create...Using existing machine configuration
	I1119 02:26:26.362507  208368 fix.go:54] fixHost starting: 
	I1119 02:26:26.362710  208368 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-896338 --format={{.State.Status}}
	I1119 02:26:26.386432  208368 fix.go:112] recreateIfNeeded on kubernetes-upgrade-896338: state=Running err=<nil>
	W1119 02:26:26.386456  208368 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 02:26:26.388067  208368 out.go:252] * Updating the running docker "kubernetes-upgrade-896338" container ...
	I1119 02:26:26.388102  208368 machine.go:94] provisionDockerMachine start ...
	I1119 02:26:26.388168  208368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-896338
	I1119 02:26:26.411458  208368 main.go:143] libmachine: Using SSH client type: native
	I1119 02:26:26.411844  208368 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32989 <nil> <nil>}
	I1119 02:26:26.411864  208368 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 02:26:26.549789  208368 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-896338
	
	I1119 02:26:26.549824  208368 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-896338"
	I1119 02:26:26.549893  208368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-896338
	I1119 02:26:26.574492  208368 main.go:143] libmachine: Using SSH client type: native
	I1119 02:26:26.574788  208368 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32989 <nil> <nil>}
	I1119 02:26:26.574808  208368 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-896338 && echo "kubernetes-upgrade-896338" | sudo tee /etc/hostname
	I1119 02:26:26.721175  208368 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-896338
	
	I1119 02:26:26.721268  208368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-896338
	I1119 02:26:26.742777  208368 main.go:143] libmachine: Using SSH client type: native
	I1119 02:26:26.743043  208368 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32989 <nil> <nil>}
	I1119 02:26:26.743076  208368 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-896338' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-896338/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-896338' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 02:26:26.883387  208368 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 02:26:26.883418  208368 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-11107/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-11107/.minikube}
	I1119 02:26:26.883464  208368 ubuntu.go:190] setting up certificates
	I1119 02:26:26.883477  208368 provision.go:84] configureAuth start
	I1119 02:26:26.883545  208368 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-896338
	I1119 02:26:26.907583  208368 provision.go:143] copyHostCerts
	I1119 02:26:26.907661  208368 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11107/.minikube/ca.pem, removing ...
	I1119 02:26:26.907683  208368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11107/.minikube/ca.pem
	I1119 02:26:26.907775  208368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-11107/.minikube/ca.pem (1082 bytes)
	I1119 02:26:26.907916  208368 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11107/.minikube/cert.pem, removing ...
	I1119 02:26:26.907928  208368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11107/.minikube/cert.pem
	I1119 02:26:26.907972  208368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-11107/.minikube/cert.pem (1123 bytes)
	I1119 02:26:26.908068  208368 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11107/.minikube/key.pem, removing ...
	I1119 02:26:26.908080  208368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11107/.minikube/key.pem
	I1119 02:26:26.908114  208368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-11107/.minikube/key.pem (1675 bytes)
	I1119 02:26:26.908213  208368 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-11107/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-896338 san=[127.0.0.1 192.168.85.2 kubernetes-upgrade-896338 localhost minikube]
	I1119 02:26:27.007550  208368 provision.go:177] copyRemoteCerts
	I1119 02:26:27.007602  208368 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 02:26:27.007645  208368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-896338
	I1119 02:26:27.028966  208368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32989 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/kubernetes-upgrade-896338/id_rsa Username:docker}
	I1119 02:26:27.133845  208368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 02:26:27.155962  208368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1119 02:26:27.175670  208368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 02:26:27.194186  208368 provision.go:87] duration metric: took 310.696225ms to configureAuth
	I1119 02:26:27.194216  208368 ubuntu.go:206] setting minikube options for container-runtime
	I1119 02:26:27.194434  208368 config.go:182] Loaded profile config "kubernetes-upgrade-896338": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:26:27.194449  208368 machine.go:97] duration metric: took 806.340026ms to provisionDockerMachine
	I1119 02:26:27.194457  208368 start.go:293] postStartSetup for "kubernetes-upgrade-896338" (driver="docker")
	I1119 02:26:27.194466  208368 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 02:26:27.194512  208368 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 02:26:27.194547  208368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-896338
	I1119 02:26:27.215502  208368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32989 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/kubernetes-upgrade-896338/id_rsa Username:docker}
	I1119 02:26:27.314396  208368 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 02:26:27.318628  208368 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 02:26:27.318654  208368 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 02:26:27.318665  208368 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11107/.minikube/addons for local assets ...
	I1119 02:26:27.318716  208368 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11107/.minikube/files for local assets ...
	I1119 02:26:27.318795  208368 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem -> 146572.pem in /etc/ssl/certs
	I1119 02:26:27.318890  208368 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 02:26:27.333415  208368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem --> /etc/ssl/certs/146572.pem (1708 bytes)
	I1119 02:26:27.354169  208368 start.go:296] duration metric: took 159.698533ms for postStartSetup
	I1119 02:26:27.354255  208368 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:26:27.354300  208368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-896338
	I1119 02:26:27.376380  208368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32989 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/kubernetes-upgrade-896338/id_rsa Username:docker}
	I1119 02:26:27.476444  208368 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 02:26:27.481683  208368 fix.go:56] duration metric: took 1.11916901s for fixHost
	I1119 02:26:27.481709  208368 start.go:83] releasing machines lock for "kubernetes-upgrade-896338", held for 1.119217915s
	I1119 02:26:27.481771  208368 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-896338
	I1119 02:26:27.501873  208368 ssh_runner.go:195] Run: cat /version.json
	I1119 02:26:27.501928  208368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-896338
	I1119 02:26:27.502070  208368 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 02:26:27.502126  208368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-896338
	I1119 02:26:27.525894  208368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32989 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/kubernetes-upgrade-896338/id_rsa Username:docker}
	I1119 02:26:27.527315  208368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32989 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/kubernetes-upgrade-896338/id_rsa Username:docker}
	I1119 02:26:27.711750  208368 ssh_runner.go:195] Run: systemctl --version
	I1119 02:26:27.719641  208368 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 02:26:27.724548  208368 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 02:26:27.724606  208368 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 02:26:27.734240  208368 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 02:26:27.734265  208368 start.go:496] detecting cgroup driver to use...
	I1119 02:26:27.734296  208368 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 02:26:27.734338  208368 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1119 02:26:27.749150  208368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1119 02:26:27.764722  208368 docker.go:218] disabling cri-docker service (if available) ...
	I1119 02:26:27.764774  208368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 02:26:27.782122  208368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 02:26:27.795957  208368 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 02:26:27.901664  208368 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 02:26:28.020065  208368 docker.go:234] disabling docker service ...
	I1119 02:26:28.020126  208368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 02:26:28.038442  208368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 02:26:28.053200  208368 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 02:26:28.193711  208368 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 02:26:28.347018  208368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 02:26:28.361220  208368 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 02:26:28.380079  208368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1119 02:26:28.390885  208368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1119 02:26:28.401695  208368 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1119 02:26:28.401758  208368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1119 02:26:28.411884  208368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 02:26:28.422027  208368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1119 02:26:28.431897  208368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 02:26:28.442680  208368 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 02:26:28.452218  208368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1119 02:26:28.461863  208368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1119 02:26:28.471641  208368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1119 02:26:28.483087  208368 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 02:26:28.491476  208368 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 02:26:28.500101  208368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:26:28.614400  208368 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1119 02:26:28.768022  208368 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1119 02:26:28.768122  208368 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1119 02:26:28.775224  208368 start.go:564] Will wait 60s for crictl version
	I1119 02:26:28.775340  208368 ssh_runner.go:195] Run: which crictl
	I1119 02:26:28.781105  208368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 02:26:28.816521  208368 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1119 02:26:28.816595  208368 ssh_runner.go:195] Run: containerd --version
	I1119 02:26:28.844292  208368 ssh_runner.go:195] Run: containerd --version
	I1119 02:26:28.874792  208368 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1119 02:26:28.876538  208368 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-896338 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:26:28.901031  208368 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1119 02:26:28.907217  208368 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-896338 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-896338 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 02:26:28.907592  208368 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 02:26:28.907837  208368 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:26:28.947182  208368 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-scheduler:v1.34.1". assuming images are not preloaded.
	I1119 02:26:28.947328  208368 ssh_runner.go:195] Run: which lz4
	I1119 02:26:28.952694  208368 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1119 02:26:28.958484  208368 ssh_runner.go:356] copy: skipping /preloaded.tar.lz4 (exists)
	I1119 02:26:28.958508  208368 containerd.go:563] duration metric: took 5.861643ms to copy over tarball
	I1119 02:26:28.958566  208368 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1119 02:26:32.276678  208368 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.318087658s)
	I1119 02:26:32.276762  208368 kubeadm.go:910] preload failed, will try to load cached images: extracting tarball: 
	** stderr ** 
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
	tar: Exiting with failure status due to previous errors
	
	** /stderr **: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: Process exited with status 2
	stdout:
	
	stderr:
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
	tar: Exiting with failure status due to previous errors
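This tar failure appears to be what sends the restart down the slow path: the v1.34.1 preload is extracted into /var of a container whose filesystem is already populated from the earlier starts, and GNU tar refuses to replace the existing zoneinfo entries, so the preload is abandoned. A hypothetical variant of the command, as an illustration of the failure mode rather than minikube's actual remedy: GNU tar's --skip-old-files flag (available since tar 1.28) skips entries that already exist instead of failing:

    // tar_sketch.go: variant of the extraction command from the log with
    // --skip-old-files, which leaves existing files in place instead of
    // failing with "Cannot open: File exists". Illustration only.
    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("sudo", "tar", "--skip-old-files",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4").CombinedOutput()
        if err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }
    }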
	I1119 02:26:32.276873  208368 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:26:32.306738  208368 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-scheduler:v1.34.1". assuming images are not preloaded.
	I1119 02:26:32.306763  208368 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1119 02:26:32.306933  208368 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1119 02:26:32.306991  208368 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1119 02:26:32.307043  208368 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1119 02:26:32.307103  208368 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:26:32.306961  208368 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1119 02:26:32.306996  208368 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1119 02:26:32.306960  208368 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 02:26:32.307545  208368 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1119 02:26:32.308746  208368 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1119 02:26:32.308830  208368 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1119 02:26:32.308966  208368 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1119 02:26:32.309030  208368 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1119 02:26:32.309010  208368 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 02:26:32.309062  208368 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:26:32.309177  208368 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1119 02:26:32.309303  208368 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1119 02:26:32.473580  208368 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.12.1" and sha "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969"
	I1119 02:26:32.473652  208368 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.12.1
	I1119 02:26:32.482356  208368 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.34.1" and sha "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f"
	I1119 02:26:32.482461  208368 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 02:26:32.482459  208368 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.34.1" and sha "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813"
	I1119 02:26:32.482537  208368 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.34.1
	I1119 02:26:32.509847  208368 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1119 02:26:32.509948  208368 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1119 02:26:32.510008  208368 ssh_runner.go:195] Run: which crictl
	I1119 02:26:32.512254  208368 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.4-0" and sha "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115"
	I1119 02:26:32.512315  208368 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.4-0
	I1119 02:26:32.516765  208368 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f"
	I1119 02:26:32.516839  208368 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1119 02:26:32.520899  208368 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1119 02:26:32.521016  208368 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 02:26:32.521087  208368 ssh_runner.go:195] Run: which crictl
	I1119 02:26:32.521427  208368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1119 02:26:32.521537  208368 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1119 02:26:32.521570  208368 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1119 02:26:32.521632  208368 ssh_runner.go:195] Run: which crictl
	I1119 02:26:32.522473  208368 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.34.1" and sha "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97"
	I1119 02:26:32.522519  208368 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.1
	I1119 02:26:32.524864  208368 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.34.1" and sha "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7"
	I1119 02:26:32.524949  208368 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.34.1
	I1119 02:26:32.549668  208368 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1119 02:26:32.549718  208368 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1119 02:26:32.549772  208368 ssh_runner.go:195] Run: which crictl
	I1119 02:26:32.563761  208368 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1119 02:26:32.563807  208368 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1119 02:26:32.563866  208368 ssh_runner.go:195] Run: which crictl
	I1119 02:26:32.563990  208368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 02:26:32.568000  208368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1119 02:26:32.568327  208368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1119 02:26:32.571812  208368 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1119 02:26:32.572115  208368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1119 02:26:32.572125  208368 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1119 02:26:32.572181  208368 ssh_runner.go:195] Run: which crictl
	I1119 02:26:32.572056  208368 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1119 02:26:32.572218  208368 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1119 02:26:32.572238  208368 ssh_runner.go:195] Run: which crictl
	I1119 02:26:32.572307  208368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1119 02:26:32.659950  208368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1119 02:26:32.660037  208368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1119 02:26:32.660107  208368 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21924-11107/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1119 02:26:32.660255  208368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1119 02:26:32.660849  208368 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21924-11107/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1119 02:26:32.660989  208368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1119 02:26:32.661014  208368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1119 02:26:32.706989  208368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1119 02:26:32.707021  208368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1119 02:26:32.707049  208368 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21924-11107/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1119 02:26:32.795059  208368 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21924-11107/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1119 02:26:32.795132  208368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1119 02:26:32.795168  208368 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21924-11107/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1119 02:26:32.795181  208368 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21924-11107/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1119 02:26:32.832828  208368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1119 02:26:32.864167  208368 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21924-11107/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1119 02:26:33.619208  208368 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I1119 02:26:33.619275  208368 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:26:33.644910  208368 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1119 02:26:33.644965  208368 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:26:33.645010  208368 ssh_runner.go:195] Run: which crictl
	I1119 02:26:33.650104  208368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:26:33.678431  208368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:26:33.706942  208368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:26:33.735119  208368 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21924-11107/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1119 02:26:33.735207  208368 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1119 02:26:33.739320  208368 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1119 02:26:33.739346  208368 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1119 02:26:33.739417  208368 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1119 02:26:33.957571  208368 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21924-11107/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1119 02:26:33.957634  208368 cache_images.go:94] duration metric: took 1.650855136s to LoadCachedImages
	W1119 02:26:33.957710  208368 out.go:285] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/21924-11107/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1: no such file or directory
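
The block above shows minikube's image-preload pattern for containerd: probe each expected image by digest, remove any stale tag with crictl, copy the cached tarball under /var/lib/minikube/images, and import it with ctr into the k8s.io namespace (the kube-controller-manager tarball is the one missing from the host cache here, hence the warning). A minimal Go sketch of the remove-then-import step, assuming a node where sudo, crictl, and ctr are on PATH; loadCachedImage and its paths are illustrative, not minikube's actual helper:

package main

import (
	"fmt"
	"os/exec"
)

// loadCachedImage mirrors the logged sequence: best-effort removal of a
// possibly-stale tag, then import of the cached tarball into containerd's
// k8s.io namespace so the CRI can see it.
func loadCachedImage(image, tarball string) error {
	// Tolerate rmi failures, as the log does: the tag may simply be absent.
	_ = exec.Command("sudo", "crictl", "rmi", image).Run()

	out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("import %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	if err := loadCachedImage(
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"/var/lib/minikube/images/storage-provisioner_v5",
	); err != nil {
		fmt.Println(err)
	}
}
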
	I1119 02:26:33.957725  208368 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1119 02:26:33.957842  208368 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-896338 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-896338 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
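
The kubelet unit logged just above is a systemd drop-in: the empty ExecStart= clears the packaged command line before the versioned binary is set. A hedged sketch of rendering such a drop-in with text/template; the template body copies the logged unit, while the rendering code and field names are illustrative, not minikube's actual kubeadm template:

package main

import (
	"os"
	"text/template"
)

// dropIn reproduces the unit text from the log, with the three values that
// vary per cluster pulled out as template fields.
const dropIn = `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.34.1",
		"NodeName":          "kubernetes-upgrade-896338",
		"NodeIP":            "192.168.85.2",
	})
}
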
	I1119 02:26:33.957910  208368 ssh_runner.go:195] Run: sudo crictl info
	I1119 02:26:33.988381  208368 cni.go:84] Creating CNI manager for ""
	I1119 02:26:33.988404  208368 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 02:26:33.988422  208368 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 02:26:33.988451  208368 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-896338 NodeName:kubernetes-upgrade-896338 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:26:33.988606  208368 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kubernetes-upgrade-896338"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
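
The kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), which is what lands on the node as /var/tmp/minikube/kubeadm.yaml.new a few lines below. A quick sanity check that decodes each document and prints its kind, assuming gopkg.in/yaml.v3 (any multi-document YAML decoder would do):

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// Reads a multi-document kubeadm config from stdin and prints each
// document's apiVersion and kind -- a cheap check that the stream splits
// and parses before it is shipped to the node.
func main() {
	dec := yaml.NewDecoder(os.Stdin)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			fmt.Fprintln(os.Stderr, "parse:", err)
			os.Exit(1)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}

Fed the block above, this should print the four kinds in order, ending with KubeProxyConfiguration.
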
	
	I1119 02:26:33.988691  208368 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 02:26:33.999039  208368 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 02:26:33.999108  208368 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:26:34.007943  208368 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
	I1119 02:26:34.025266  208368 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:26:34.041893  208368 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1119 02:26:34.055278  208368 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:26:34.059867  208368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:26:34.168130  208368 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:26:34.188952  208368 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kubernetes-upgrade-896338 for IP: 192.168.85.2
	I1119 02:26:34.188975  208368 certs.go:195] generating shared ca certs ...
	I1119 02:26:34.188994  208368 certs.go:227] acquiring lock for ca certs: {Name:mk11d6789b2333e17b3937495b501fbcca15c242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:26:34.189150  208368 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-11107/.minikube/ca.key
	I1119 02:26:34.189210  208368 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-11107/.minikube/proxy-client-ca.key
	I1119 02:26:34.189218  208368 certs.go:257] generating profile certs ...
	I1119 02:26:34.189309  208368 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kubernetes-upgrade-896338/client.key
	I1119 02:26:34.189359  208368 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kubernetes-upgrade-896338/apiserver.key.6cf5ace0
	I1119 02:26:34.189420  208368 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kubernetes-upgrade-896338/proxy-client.key
	I1119 02:26:34.189559  208368 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/14657.pem (1338 bytes)
	W1119 02:26:34.189589  208368 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-11107/.minikube/certs/14657_empty.pem, impossibly tiny 0 bytes
	I1119 02:26:34.189599  208368 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 02:26:34.189629  208368 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem (1082 bytes)
	I1119 02:26:34.189658  208368 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:26:34.189687  208368 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/key.pem (1675 bytes)
	I1119 02:26:34.189735  208368 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem (1708 bytes)
	I1119 02:26:34.190526  208368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:26:34.220408  208368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 02:26:34.247878  208368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:26:34.276203  208368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:26:34.299606  208368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kubernetes-upgrade-896338/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1119 02:26:34.321787  208368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kubernetes-upgrade-896338/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 02:26:34.340957  208368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kubernetes-upgrade-896338/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:26:34.363167  208368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kubernetes-upgrade-896338/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 02:26:34.386180  208368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:26:34.407202  208368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/certs/14657.pem --> /usr/share/ca-certificates/14657.pem (1338 bytes)
	I1119 02:26:34.429119  208368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem --> /usr/share/ca-certificates/146572.pem (1708 bytes)
	I1119 02:26:34.450412  208368 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 02:26:34.465425  208368 ssh_runner.go:195] Run: openssl version
	I1119 02:26:34.473391  208368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:26:34.483962  208368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:26:34.488737  208368 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:57 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:26:34.488800  208368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:26:34.529195  208368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 02:26:34.539184  208368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14657.pem && ln -fs /usr/share/ca-certificates/14657.pem /etc/ssl/certs/14657.pem"
	I1119 02:26:34.549392  208368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14657.pem
	I1119 02:26:34.554191  208368 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:02 /usr/share/ca-certificates/14657.pem
	I1119 02:26:34.554255  208368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14657.pem
	I1119 02:26:34.595783  208368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14657.pem /etc/ssl/certs/51391683.0"
	I1119 02:26:34.607410  208368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146572.pem && ln -fs /usr/share/ca-certificates/146572.pem /etc/ssl/certs/146572.pem"
	I1119 02:26:34.621578  208368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146572.pem
	I1119 02:26:34.629723  208368 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:02 /usr/share/ca-certificates/146572.pem
	I1119 02:26:34.629786  208368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146572.pem
	I1119 02:26:34.675928  208368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/146572.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 02:26:34.685390  208368 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:26:34.690500  208368 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 02:26:34.740572  208368 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 02:26:34.791600  208368 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 02:26:34.832919  208368 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 02:26:34.880790  208368 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 02:26:34.934254  208368 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
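
Each `openssl x509 -checkend 86400` run above asks one question: does the certificate expire within the next 24 hours (86400 seconds)? The same check in Go via crypto/x509, using a cert path from the log; checkend is an illustrative name, not a minikube function:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -checkend` as used in the log above.
func checkend(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	expiring, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}
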
	I1119 02:26:35.000402  208368 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-896338 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-896338 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:26:35.000499  208368 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1119 02:26:35.000558  208368 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:26:35.045018  208368 cri.go:89] found id: ""
	I1119 02:26:35.045078  208368 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:26:35.058407  208368 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 02:26:35.058427  208368 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 02:26:35.058476  208368 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 02:26:35.069042  208368 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 02:26:35.069857  208368 kubeconfig.go:125] found "kubernetes-upgrade-896338" server: "https://192.168.85.2:8443"
	I1119 02:26:35.070789  208368 kapi.go:59] client config for kubernetes-upgrade-896338: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kubernetes-upgrade-896338/client.crt", KeyFile:"/home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kubernetes-upgrade-896338/client.key", CAFile:"/home/jenkins/minikube-integration/21924-11107/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1119 02:26:35.071327  208368 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1119 02:26:35.071349  208368 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1119 02:26:35.071355  208368 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1119 02:26:35.071361  208368 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1119 02:26:35.071383  208368 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1119 02:26:35.071789  208368 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 02:26:35.082968  208368 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1119 02:26:35.083054  208368 kubeadm.go:602] duration metric: took 24.617333ms to restartPrimaryControlPlane
	I1119 02:26:35.083083  208368 kubeadm.go:403] duration metric: took 82.694531ms to StartCluster
	I1119 02:26:35.083115  208368 settings.go:142] acquiring lock: {Name:mka23b77a5b0acf39a3c823ca2c708cd59c6f5ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:26:35.083201  208368 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11107/kubeconfig
	I1119 02:26:35.084225  208368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/kubeconfig: {Name:mka1017ae43351e828a5b28d36aa5ae57c3e40e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:26:35.084544  208368 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 02:26:35.084723  208368 config.go:182] Loaded profile config "kubernetes-upgrade-896338": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:26:35.084786  208368 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:26:35.084893  208368 addons.go:70] Setting storage-provisioner=true in profile "kubernetes-upgrade-896338"
	I1119 02:26:35.084909  208368 addons.go:239] Setting addon storage-provisioner=true in "kubernetes-upgrade-896338"
	W1119 02:26:35.084917  208368 addons.go:248] addon storage-provisioner should already be in state true
	I1119 02:26:35.085013  208368 host.go:66] Checking if "kubernetes-upgrade-896338" exists ...
	I1119 02:26:35.084965  208368 addons.go:70] Setting default-storageclass=true in profile "kubernetes-upgrade-896338"
	I1119 02:26:35.085102  208368 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-896338"
	I1119 02:26:35.085512  208368 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-896338 --format={{.State.Status}}
	I1119 02:26:35.085541  208368 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-896338 --format={{.State.Status}}
	I1119 02:26:35.087156  208368 out.go:179] * Verifying Kubernetes components...
	I1119 02:26:35.088522  208368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:26:35.113426  208368 kapi.go:59] client config for kubernetes-upgrade-896338: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kubernetes-upgrade-896338/client.crt", KeyFile:"/home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kubernetes-upgrade-896338/client.key", CAFile:"/home/jenkins/minikube-integration/21924-11107/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1119 02:26:35.113774  208368 addons.go:239] Setting addon default-storageclass=true in "kubernetes-upgrade-896338"
	W1119 02:26:35.113797  208368 addons.go:248] addon default-storageclass should already be in state true
	I1119 02:26:35.113825  208368 host.go:66] Checking if "kubernetes-upgrade-896338" exists ...
	I1119 02:26:35.114304  208368 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-896338 --format={{.State.Status}}
	I1119 02:26:35.116489  208368 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:26:35.117748  208368 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:26:35.117768  208368 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:26:35.117837  208368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-896338
	I1119 02:26:35.146800  208368 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:26:35.146822  208368 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:26:35.146879  208368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-896338
	I1119 02:26:35.154838  208368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32989 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/kubernetes-upgrade-896338/id_rsa Username:docker}
	I1119 02:26:35.178909  208368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32989 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/kubernetes-upgrade-896338/id_rsa Username:docker}
	I1119 02:26:35.241005  208368 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:26:35.260582  208368 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:26:35.260679  208368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:26:35.273672  208368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:26:35.279303  208368 api_server.go:72] duration metric: took 194.706746ms to wait for apiserver process to appear ...
	I1119 02:26:35.279335  208368 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:26:35.279355  208368 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 02:26:35.302621  208368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
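
The two kubectl apply runs above install the addon manifests that were just copied to /etc/kubernetes/addons, using the node-local kubeconfig and the versioned kubectl binary. A sketch of the same invocation from Go, assuming it runs on the node itself; applyAddon is an illustrative wrapper, not minikube's addon code:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddon mirrors the logged command: sudo accepts leading VAR=value
// arguments, so KUBECONFIG is set for kubectl without a shell.
func applyAddon(manifest string) error {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.34.1/kubectl",
		"apply", "-f", manifest)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := applyAddon("/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
		fmt.Println("apply failed:", err)
	}
}
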
	I1119 02:26:37.285536  208368 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 02:26:37.285580  208368 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
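
From here the start loop polls https://192.168.85.2:8443/healthz every couple of seconds and treats anything but a 200 as not-ready; [-]etcd is the only failing probe in every dump below, consistent with etcd still coming up after the control-plane restart. A minimal sketch of such a poll, with TLS verification skipped for brevity (the real client authenticates with the cluster CA and a client certificate):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes. InsecureSkipVerify is for brevity only.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.85.2:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
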
	I1119 02:26:37.285604  208368 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 02:26:39.291386  208368 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 02:26:39.291427  208368 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 02:26:39.291443  208368 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 02:26:41.296716  208368 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 02:26:41.296754  208368 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 02:26:41.296773  208368 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 02:26:43.301986  208368 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 02:26:43.302018  208368 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 02:26:43.302037  208368 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 02:26:43.306106  208368 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 02:26:43.306129  208368 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 02:26:43.779719  208368 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 02:26:45.784542  208368 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 02:26:45.784578  208368 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 02:26:45.784599  208368 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 02:26:47.791094  208368 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 02:26:47.791253  208368 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 02:26:47.791291  208368 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	[... healthz polling continued: checks at 02:26:49, 02:26:51, 02:26:53, 02:26:55, 02:26:57, 02:26:59 and 02:27:01 each returned the identical 500 body shown above ("[-]etcd failed: reason withheld", all other checks ok), and each response was logged twice, at api_server.go:279 and api_server.go:103 ...]
	I1119 02:27:01.830438  208368 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 02:27:06.831618  208368 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1119 02:27:06.831657  208368 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 02:27:11.832438  208368 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1119 02:27:11.832478  208368 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 02:27:13.505962  208368 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1119 02:27:13.512292  208368 api_server.go:141] control plane version: v1.34.1
	I1119 02:27:13.512321  208368 api_server.go:131] duration metric: took 38.232976455s to wait for apiserver health ...
	I1119 02:27:13.512332  208368 system_pods.go:43] waiting for kube-system pods to appear ...
	W1119 02:28:13.513502  208368 system_pods.go:55] pod list returned error: the server was unable to return a response in the time allotted, but may still be processing the request (get pods)
	I1119 02:28:13.513542  208368 retry.go:31] will retry after 218.685635ms: the server was unable to return a response in the time allotted, but may still be processing the request (get pods)
	I1119 02:28:13.732637  208368 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1119 02:28:13.732729  208368 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 02:31:35.676886  208368 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5m0.403131384s)
	W1119 02:31:35.676952  208368 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	
	stderr:
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount"
	Name: "storage-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts storage-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"
	Name: "storage-provisioner", Namespace: ""
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io storage-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=roles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=Role"
	Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:persistent-volume-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=endpoints", GroupVersionKind: "/v1, Kind=Endpoints"
	Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints k8s.io-minikube-hostpath)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
	Name: "storage-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get pods storage-provisioner)
	W1119 02:31:35.677096  208368 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	
	stderr:
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount"
	Name: "storage-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts storage-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"
	Name: "storage-provisioner", Namespace: ""
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io storage-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=roles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=Role"
	Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:persistent-volume-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=endpoints", GroupVersionKind: "/v1, Kind=Endpoints"
	Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints k8s.io-minikube-hostpath)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
	Name: "storage-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get pods storage-provisioner)
	]
	I1119 02:31:35.677167  208368 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5m0.374454709s)
	W1119 02:31:35.677217  208368 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "storage.k8s.io/v1, Resource=storageclasses", GroupVersionKind: "storage.k8s.io/v1, Kind=StorageClass"
	Name: "standard", Namespace: ""
	from server for: "/etc/kubernetes/addons/storageclass.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get storageclasses.storage.k8s.io standard)
	I1119 02:31:35.677247  208368 ssh_runner.go:235] Completed: sudo crictl ps -a --quiet --name=kube-apiserver: (3m21.944479492s)
	I1119 02:31:35.677269  208368 cri.go:89] found id: "24679a3f4e0ae74c83eaf8c278efdd96c3afb2c5688e0f358470b78a7f7a38e9"
	I1119 02:31:35.677275  208368 cri.go:89] found id: ""
	I1119 02:31:35.677284  208368 logs.go:282] 1 containers: [24679a3f4e0ae74c83eaf8c278efdd96c3afb2c5688e0f358470b78a7f7a38e9]
	W1119 02:31:35.677299  208368 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "storage.k8s.io/v1, Resource=storageclasses", GroupVersionKind: "storage.k8s.io/v1, Kind=StorageClass"
	Name: "standard", Namespace: ""
	from server for: "/etc/kubernetes/addons/storageclass.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get storageclasses.storage.k8s.io standard)
	]
	I1119 02:31:35.677338  208368 ssh_runner.go:195] Run: which crictl
	I1119 02:31:35.679965  208368 out.go:179] * Enabled addons: 
	I1119 02:31:35.681147  208368 addons.go:515] duration metric: took 5m0.596355676s for enable addons: enabled=[]
	I1119 02:31:35.683740  208368 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1119 02:31:35.683811  208368 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 02:31:35.725560  208368 cri.go:89] found id: "f7df69037dad73c346bafade9f17ccda547baf86f109ee96ebf9ec5074fdc32c"
	I1119 02:31:35.725584  208368 cri.go:89] found id: ""
	I1119 02:31:35.725593  208368 logs.go:282] 1 containers: [f7df69037dad73c346bafade9f17ccda547baf86f109ee96ebf9ec5074fdc32c]
	I1119 02:31:35.725653  208368 ssh_runner.go:195] Run: which crictl
	I1119 02:31:35.730817  208368 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1119 02:31:35.730897  208368 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 02:31:35.774777  208368 cri.go:89] found id: ""
	I1119 02:31:35.774803  208368 logs.go:282] 0 containers: []
	W1119 02:31:35.774812  208368 logs.go:284] No container was found matching "coredns"
	I1119 02:31:35.774818  208368 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1119 02:31:35.774871  208368 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 02:31:35.817744  208368 cri.go:89] found id: "2fc1c7d64ddfc8cfae76fafb1d2818e8e60acd2e091805d791cfdd40dbc01017"
	I1119 02:31:35.817771  208368 cri.go:89] found id: ""
	I1119 02:31:35.817781  208368 logs.go:282] 1 containers: [2fc1c7d64ddfc8cfae76fafb1d2818e8e60acd2e091805d791cfdd40dbc01017]
	I1119 02:31:35.817843  208368 ssh_runner.go:195] Run: which crictl
	I1119 02:31:35.824002  208368 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1119 02:31:35.824267  208368 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 02:31:35.869793  208368 cri.go:89] found id: ""
	I1119 02:31:35.869824  208368 logs.go:282] 0 containers: []
	W1119 02:31:35.869834  208368 logs.go:284] No container was found matching "kube-proxy"
	I1119 02:31:35.869841  208368 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 02:31:35.869898  208368 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 02:31:35.910763  208368 cri.go:89] found id: "1ba0c8fe18b0c917482c746cfef00696629bcc9748d8c3e10ced55d71c2c1a03"
	I1119 02:31:35.910785  208368 cri.go:89] found id: ""
	I1119 02:31:35.910794  208368 logs.go:282] 1 containers: [1ba0c8fe18b0c917482c746cfef00696629bcc9748d8c3e10ced55d71c2c1a03]
	I1119 02:31:35.910866  208368 ssh_runner.go:195] Run: which crictl
	I1119 02:31:35.916693  208368 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1119 02:31:35.916769  208368 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 02:31:35.956637  208368 cri.go:89] found id: ""
	I1119 02:31:35.956666  208368 logs.go:282] 0 containers: []
	W1119 02:31:35.956677  208368 logs.go:284] No container was found matching "kindnet"
	I1119 02:31:35.956684  208368 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1119 02:31:35.956758  208368 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 02:31:35.998527  208368 cri.go:89] found id: ""
	I1119 02:31:35.998632  208368 logs.go:282] 0 containers: []
	W1119 02:31:35.998644  208368 logs.go:284] No container was found matching "storage-provisioner"
	I1119 02:31:35.998661  208368 logs.go:123] Gathering logs for describe nodes ...
	I1119 02:31:35.998678  208368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1119 02:32:36.105090  208368 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1m0.106387506s)
	W1119 02:32:36.105143  208368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	I1119 02:32:36.105158  208368 logs.go:123] Gathering logs for kube-apiserver [24679a3f4e0ae74c83eaf8c278efdd96c3afb2c5688e0f358470b78a7f7a38e9] ...
	I1119 02:32:36.105171  208368 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 24679a3f4e0ae74c83eaf8c278efdd96c3afb2c5688e0f358470b78a7f7a38e9"
	W1119 02:32:36.132063  208368 logs.go:130] failed kube-apiserver [24679a3f4e0ae74c83eaf8c278efdd96c3afb2c5688e0f358470b78a7f7a38e9]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 24679a3f4e0ae74c83eaf8c278efdd96c3afb2c5688e0f358470b78a7f7a38e9" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 24679a3f4e0ae74c83eaf8c278efdd96c3afb2c5688e0f358470b78a7f7a38e9": Process exited with status 1
	stdout:
	
	stderr:
	E1119 02:32:36.129725    3701 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"24679a3f4e0ae74c83eaf8c278efdd96c3afb2c5688e0f358470b78a7f7a38e9\": not found" containerID="24679a3f4e0ae74c83eaf8c278efdd96c3afb2c5688e0f358470b78a7f7a38e9"
	time="2025-11-19T02:32:36Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"24679a3f4e0ae74c83eaf8c278efdd96c3afb2c5688e0f358470b78a7f7a38e9\": not found"
	I1119 02:32:36.132092  208368 logs.go:123] Gathering logs for etcd [f7df69037dad73c346bafade9f17ccda547baf86f109ee96ebf9ec5074fdc32c] ...
	I1119 02:32:36.132115  208368 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f7df69037dad73c346bafade9f17ccda547baf86f109ee96ebf9ec5074fdc32c"
	I1119 02:32:36.168586  208368 logs.go:123] Gathering logs for kube-controller-manager [1ba0c8fe18b0c917482c746cfef00696629bcc9748d8c3e10ced55d71c2c1a03] ...
	I1119 02:32:36.168617  208368 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ba0c8fe18b0c917482c746cfef00696629bcc9748d8c3e10ced55d71c2c1a03"
	I1119 02:32:36.200394  208368 logs.go:123] Gathering logs for kubelet ...
	I1119 02:32:36.200424  208368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1119 02:32:36.241938  208368 logs.go:138] Found kubelet problem: Nov 19 02:26:22 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:26:22.359998    1153 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-kubernetes-upgrade-896338\" is forbidden: User \"system:node:kubernetes-upgrade-896338\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-896338' and this object" podUID="66027d99863065946c7e847721e63c6c" pod="kube-system/kube-scheduler-kubernetes-upgrade-896338"
	W1119 02:32:36.242091  208368 logs.go:138] Found kubelet problem: Nov 19 02:26:22 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:26:22.367766    1153 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-kubernetes-upgrade-896338\" is forbidden: User \"system:node:kubernetes-upgrade-896338\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-896338' and this object" podUID="671496fd68155efc6c8e333483b2ec93" pod="kube-system/etcd-kubernetes-upgrade-896338"
	W1119 02:32:36.242234  208368 logs.go:138] Found kubelet problem: Nov 19 02:26:22 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:26:22.369746    1153 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-kubernetes-upgrade-896338\" is forbidden: User \"system:node:kubernetes-upgrade-896338\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-896338' and this object" podUID="66027d99863065946c7e847721e63c6c" pod="kube-system/kube-scheduler-kubernetes-upgrade-896338"
	W1119 02:32:36.242373  208368 logs.go:138] Found kubelet problem: Nov 19 02:26:22 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:26:22.371861    1153 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-kubernetes-upgrade-896338\" is forbidden: User \"system:node:kubernetes-upgrade-896338\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-896338' and this object" podUID="671496fd68155efc6c8e333483b2ec93" pod="kube-system/etcd-kubernetes-upgrade-896338"
	I1119 02:32:36.296651  208368 logs.go:123] Gathering logs for dmesg ...
	I1119 02:32:36.296695  208368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 02:32:36.313476  208368 logs.go:123] Gathering logs for kube-scheduler [2fc1c7d64ddfc8cfae76fafb1d2818e8e60acd2e091805d791cfdd40dbc01017] ...
	I1119 02:32:36.313518  208368 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2fc1c7d64ddfc8cfae76fafb1d2818e8e60acd2e091805d791cfdd40dbc01017"
	I1119 02:32:36.342697  208368 logs.go:123] Gathering logs for containerd ...
	I1119 02:32:36.342725  208368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1119 02:32:36.408496  208368 logs.go:123] Gathering logs for container status ...
	I1119 02:32:36.408523  208368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 02:32:36.439684  208368 out.go:374] Setting ErrFile to fd 2...
	I1119 02:32:36.439708  208368 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1119 02:32:36.439772  208368 out.go:285] X Problems detected in kubelet:
	W1119 02:32:36.439787  208368 out.go:285]   Nov 19 02:26:22 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:26:22.359998    1153 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-kubernetes-upgrade-896338\" is forbidden: User \"system:node:kubernetes-upgrade-896338\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-896338' and this object" podUID="66027d99863065946c7e847721e63c6c" pod="kube-system/kube-scheduler-kubernetes-upgrade-896338"
	W1119 02:32:36.439797  208368 out.go:285]   Nov 19 02:26:22 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:26:22.367766    1153 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-kubernetes-upgrade-896338\" is forbidden: User \"system:node:kubernetes-upgrade-896338\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-896338' and this object" podUID="671496fd68155efc6c8e333483b2ec93" pod="kube-system/etcd-kubernetes-upgrade-896338"
	W1119 02:32:36.439808  208368 out.go:285]   Nov 19 02:26:22 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:26:22.369746    1153 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-kubernetes-upgrade-896338\" is forbidden: User \"system:node:kubernetes-upgrade-896338\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-896338' and this object" podUID="66027d99863065946c7e847721e63c6c" pod="kube-system/kube-scheduler-kubernetes-upgrade-896338"
	W1119 02:32:36.439819  208368 out.go:285]   Nov 19 02:26:22 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:26:22.371861    1153 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-kubernetes-upgrade-896338\" is forbidden: User \"system:node:kubernetes-upgrade-896338\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-896338' and this object" podUID="671496fd68155efc6c8e333483b2ec93" pod="kube-system/etcd-kubernetes-upgrade-896338"
	I1119 02:32:36.439827  208368 out.go:374] Setting ErrFile to fd 2...
	I1119 02:32:36.439842  208368 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1119 02:33:46.450455  208368 system_pods.go:55] pod list returned error: the server was unable to return a response in the time allotted, but may still be processing the request (get pods)
	I1119 02:33:46.452233  208368 out.go:203] 
	W1119 02:33:46.453522  208368 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for system pods: apiserver never returned a pod list
	W1119 02:33:46.453544  208368 out.go:285] * 
	W1119 02:33:46.455831  208368 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 02:33:46.457044  208368 out.go:203] 

                                                
                                                
** /stderr **
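
The dominant pattern in the stderr trace above is the apiserver readiness loop: a GET against https://192.168.85.2:8443/healthz roughly every two seconds, where any non-200 response (or a transport timeout) counts as not-ready and the per-check body is printed. The sketch below is a minimal, hypothetical reconstruction of that loop in Go; the 2s interval, 5s request timeout, 6m deadline, and skipped TLS verification are illustrative assumptions, not minikube's actual api_server.go.

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 ("ok") or ctx expires.
// Non-200 responses carry a per-check body like "[-]etcd failed: reason withheld".
func waitForHealthz(ctx context.Context, url string, interval time.Duration) error {
	// The apiserver serves a self-signed certificate during bootstrap,
	// so this probe skips verification, as a CLI health check typically would.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("healthz unreachable: %v\n", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("healthz never returned 200: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForHealthz(ctx, "https://192.168.85.2:8443/healthz", 2*time.Second); err != nil {
		fmt.Println(err)
	}
}

Run against the node above, such a loop would keep printing the same "[-]etcd failed: reason withheld" body until the apiserver finally returned 200 at 02:27:13, then return nil.
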
version_upgrade_test.go:277: start after failed upgrade: out/minikube-linux-amd64 start -p kubernetes-upgrade-896338 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 80
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-11-19 02:33:46.536002962 +0000 UTC m=+2246.038109760
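
Two other behaviors are visible in the trace: the addon applies that blocked for exactly 5m0s before failing on server-side timeouts, and the retry helper that re-queues a failed step after a short randomized delay (retry.go:31, "will retry after 218.685635ms"). Below is an illustrative retry loop with exponential backoff and jitter in the same spirit; the attempt count, base delay, and jitter scheme are assumptions, not minikube's actual retry.go.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs op up to attempts times, sleeping between failed
// attempts for base*2^i plus up to 100% random jitter.
func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		if i == attempts-1 {
			break // no sleep after the final attempt
		}
		d := base << uint(i)
		d += time.Duration(rand.Int63n(int64(d))) // jitter
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return fmt.Errorf("all %d attempts failed: %w", attempts, err)
}

func main() {
	calls := 0
	err := retryWithBackoff(3, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			// Stand-in for the server-side timeouts seen above.
			return fmt.Errorf("the server was unable to return a response in the time allotted")
		}
		return nil
	})
	fmt.Println("result:", err)
}

Backoff alone could not have rescued this run: every retry hit the same server-side timeout because the apiserver never answered the pod list, which is what ultimately produced the GUEST_START failure recorded above.
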
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect kubernetes-upgrade-896338
helpers_test.go:243: (dbg) docker inspect kubernetes-upgrade-896338:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "969b8bd4216afabb406559f3e1a22664d005617358fe9e598899f2ace66dabbe",
	        "Created": "2025-11-19T02:25:33.919793352Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 200955,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T02:25:56.498827348Z",
	            "FinishedAt": "2025-11-19T02:25:55.50311594Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/969b8bd4216afabb406559f3e1a22664d005617358fe9e598899f2ace66dabbe/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/969b8bd4216afabb406559f3e1a22664d005617358fe9e598899f2ace66dabbe/hostname",
	        "HostsPath": "/var/lib/docker/containers/969b8bd4216afabb406559f3e1a22664d005617358fe9e598899f2ace66dabbe/hosts",
	        "LogPath": "/var/lib/docker/containers/969b8bd4216afabb406559f3e1a22664d005617358fe9e598899f2ace66dabbe/969b8bd4216afabb406559f3e1a22664d005617358fe9e598899f2ace66dabbe-json.log",
	        "Name": "/kubernetes-upgrade-896338",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-896338:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "kubernetes-upgrade-896338",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "969b8bd4216afabb406559f3e1a22664d005617358fe9e598899f2ace66dabbe",
	                "LowerDir": "/var/lib/docker/overlay2/631bd77c3dbf585bbd3c946ea070c38ae4ca0251671d2b73c9f02da374b73bd4-init/diff:/var/lib/docker/overlay2/de7938e6a920c133c8c6b988444cfbf6706fdc6982445229ca70e2488a725edb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/631bd77c3dbf585bbd3c946ea070c38ae4ca0251671d2b73c9f02da374b73bd4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/631bd77c3dbf585bbd3c946ea070c38ae4ca0251671d2b73c9f02da374b73bd4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/631bd77c3dbf585bbd3c946ea070c38ae4ca0251671d2b73c9f02da374b73bd4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-896338",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-896338/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-896338",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-896338",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-896338",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "1153b4f9ef44bdc780101f996d65a35e48daf57d1eb0832294c5cf8db1dfc323",
	            "SandboxKey": "/var/run/docker/netns/1153b4f9ef44",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32989"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32990"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32993"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32991"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32992"
	                    }
	                ]
	            },
	            "Networks": {
	                "kubernetes-upgrade-896338": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3ec6f45a7001c9838b1db6d7bcbc836f8d598109023fa2e585c2ea7beed066aa",
	                    "EndpointID": "7d69cba837f5e774db2e8b3f43d7f1317ce0691adab51bc67ff99a7934c17636",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "82:24:4b:ee:ad:76",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-896338",
	                        "969b8bd4216a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
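Note: the inspect dump above is what the test helpers later query field-by-field with Go templates rather than re-parsing the whole JSON. For example, the host port published for SSH (HostPort 32989 under NetworkSettings.Ports in this dump) is extracted with the same template the suite uses further down in this log:

	# prints the 127.0.0.1 host port mapped to the container's 22/tcp
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' kubernetes-upgrade-896338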
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-896338 -n kubernetes-upgrade-896338
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-896338 -n kubernetes-upgrade-896338: exit status 2 (14.587429758s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	E1119 02:34:01.144008  319649 status.go:466] Error apiserver status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[-]log failed: reason withheld
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 2 (may be ok)
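The [+]/[-] checklist in the stderr block above is the apiserver's verbose healthz report; the 500 came from the single failing check, [-]log. To reproduce it by hand, query the endpoint through the forwarded port from the inspect output (8443/tcp -> 127.0.0.1:32992), assuming the default anonymous access to the health endpoints is still enabled:

	curl -k 'https://127.0.0.1:32992/healthz?verbose'

or, equivalently, through the cluster's kubeconfig context:

	kubectl --context kubernetes-upgrade-896338 get --raw='/healthz?verbose'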
helpers_test.go:252: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-896338 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-896338 logs -n 25: (1m0.931898717s)
helpers_test.go:260: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                             ARGS                                                                             │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-212776 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                        │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cat /etc/kubernetes/kubelet.conf                                                                                                       │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cat /var/lib/kubelet/config.yaml                                                                                                       │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo systemctl status docker --all --full --no-pager                                                                                        │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo systemctl cat docker --no-pager                                                                                                        │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cat /etc/docker/daemon.json                                                                                                            │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo docker system info                                                                                                                     │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo systemctl status cri-docker --all --full --no-pager                                                                                    │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo systemctl cat cri-docker --no-pager                                                                                                    │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                               │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                         │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cri-dockerd --version                                                                                                                  │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo systemctl status containerd --all --full --no-pager                                                                                    │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo systemctl cat containerd --no-pager                                                                                                    │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cat /lib/systemd/system/containerd.service                                                                                             │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cat /etc/containerd/config.toml                                                                                                        │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo containerd config dump                                                                                                                 │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo systemctl status crio --all --full --no-pager                                                                                          │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo systemctl cat crio --no-pager                                                                                                          │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo crio config                                                                                                                            │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ delete  │ -p bridge-212776                                                                                                                                             │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ start   │ -p embed-certs-168452 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ embed-certs-168452     │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-691094 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                 │ old-k8s-version-691094 │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ stop    │ -p old-k8s-version-691094 --alsologtostderr -v=3                                                                                                             │ old-k8s-version-691094 │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 02:33:19
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 02:33:19.818158  315363 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:33:19.818478  315363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:33:19.818490  315363 out.go:374] Setting ErrFile to fd 2...
	I1119 02:33:19.818495  315363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:33:19.818721  315363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11107/.minikube/bin
	I1119 02:33:19.819330  315363 out.go:368] Setting JSON to false
	I1119 02:33:19.820616  315363 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4540,"bootTime":1763515060,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 02:33:19.820746  315363 start.go:143] virtualization: kvm guest
	I1119 02:33:19.822862  315363 out.go:179] * [embed-certs-168452] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 02:33:19.824498  315363 notify.go:221] Checking for updates...
	I1119 02:33:19.825083  315363 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:33:19.827189  315363 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:33:19.828628  315363 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-11107/kubeconfig
	I1119 02:33:19.830282  315363 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11107/.minikube
	I1119 02:33:19.832156  315363 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 02:33:19.833558  315363 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:33:19.835289  315363 config.go:182] Loaded profile config "kubernetes-upgrade-896338": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:33:19.835456  315363 config.go:182] Loaded profile config "no-preload-483142": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:33:19.835531  315363 config.go:182] Loaded profile config "old-k8s-version-691094": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1119 02:33:19.835628  315363 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:33:19.869670  315363 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 02:33:19.869754  315363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:33:19.948056  315363 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:91 SystemTime:2025-11-19 02:33:19.935291829 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:33:19.948230  315363 docker.go:319] overlay module found
	I1119 02:33:19.949713  315363 out.go:179] * Using the docker driver based on user configuration
	I1119 02:33:19.290831  301934 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:33:19.290855  301934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:33:19.290915  301934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-691094
	I1119 02:33:19.311399  301934 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:33:19.311423  301934 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:33:19.311589  301934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-691094
	I1119 02:33:19.329209  301934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/old-k8s-version-691094/id_rsa Username:docker}
	I1119 02:33:19.348646  301934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/old-k8s-version-691094/id_rsa Username:docker}
	I1119 02:33:19.386878  301934 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 02:33:19.430928  301934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:33:19.450594  301934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:33:19.476197  301934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:33:19.710133  301934 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1119 02:33:19.711417  301934 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-691094" to be "Ready" ...
	I1119 02:33:19.994360  301934 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 02:33:19.950788  315363 start.go:309] selected driver: docker
	I1119 02:33:19.950820  315363 start.go:930] validating driver "docker" against <nil>
	I1119 02:33:19.950835  315363 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:33:19.951688  315363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:33:20.027806  315363 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:91 SystemTime:2025-11-19 02:33:20.015781927 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:33:20.028020  315363 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 02:33:20.028315  315363 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:33:20.030421  315363 out.go:179] * Using Docker driver with root privileges
	I1119 02:33:20.031895  315363 cni.go:84] Creating CNI manager for ""
	I1119 02:33:20.031986  315363 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 02:33:20.031997  315363 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 02:33:20.032066  315363 start.go:353] cluster config:
	{Name:embed-certs-168452 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-168452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:33:20.034765  315363 out.go:179] * Starting "embed-certs-168452" primary control-plane node in "embed-certs-168452" cluster
	I1119 02:33:20.037487  315363 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1119 02:33:20.039029  315363 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 02:33:20.040485  315363 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 02:33:20.040520  315363 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1119 02:33:20.040528  315363 cache.go:65] Caching tarball of preloaded images
	I1119 02:33:20.040583  315363 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 02:33:20.040607  315363 preload.go:238] Found /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1119 02:33:20.040616  315363 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1119 02:33:20.040718  315363 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/config.json ...
	I1119 02:33:20.040739  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/config.json: {Name:mk2c1cb92572f9f7372f9d895b2c58b49c99bb3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:20.063579  315363 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 02:33:20.063610  315363 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 02:33:20.063636  315363 cache.go:243] Successfully downloaded all kic artifacts
	I1119 02:33:20.063670  315363 start.go:360] acquireMachinesLock for embed-certs-168452: {Name:mk4860299f8ff219c79992500844e49d455bd43a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:33:20.063790  315363 start.go:364] duration metric: took 102.461µs to acquireMachinesLock for "embed-certs-168452"
	I1119 02:33:20.063835  315363 start.go:93] Provisioning new machine with config: &{Name:embed-certs-168452 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-168452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 02:33:20.063944  315363 start.go:125] createHost starting for "" (driver="docker")
	I1119 02:33:19.995882  301934 addons.go:515] duration metric: took 741.418352ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 02:33:20.065989  315363 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 02:33:20.066193  315363 start.go:159] libmachine.API.Create for "embed-certs-168452" (driver="docker")
	I1119 02:33:20.066226  315363 client.go:173] LocalClient.Create starting
	I1119 02:33:20.066306  315363 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem
	I1119 02:33:20.066338  315363 main.go:143] libmachine: Decoding PEM data...
	I1119 02:33:20.066360  315363 main.go:143] libmachine: Parsing certificate...
	I1119 02:33:20.066438  315363 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem
	I1119 02:33:20.066464  315363 main.go:143] libmachine: Decoding PEM data...
	I1119 02:33:20.066475  315363 main.go:143] libmachine: Parsing certificate...
	I1119 02:33:20.066835  315363 cli_runner.go:164] Run: docker network inspect embed-certs-168452 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 02:33:20.087889  315363 cli_runner.go:211] docker network inspect embed-certs-168452 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 02:33:20.087987  315363 network_create.go:284] running [docker network inspect embed-certs-168452] to gather additional debugging logs...
	I1119 02:33:20.088020  315363 cli_runner.go:164] Run: docker network inspect embed-certs-168452
	W1119 02:33:20.108512  315363 cli_runner.go:211] docker network inspect embed-certs-168452 returned with exit code 1
	I1119 02:33:20.108553  315363 network_create.go:287] error running [docker network inspect embed-certs-168452]: docker network inspect embed-certs-168452: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-168452 not found
	I1119 02:33:20.108577  315363 network_create.go:289] output of [docker network inspect embed-certs-168452]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-168452 not found
	
	** /stderr **
	I1119 02:33:20.108677  315363 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:33:20.129985  315363 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ed39016f2aa9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:16:a0:62:5a:51} reservation:<nil>}
	I1119 02:33:20.130640  315363 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-42b0c19d513b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:b2:bf:ca:ce:21:95} reservation:<nil>}
	I1119 02:33:20.131454  315363 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-002e39e6dc05 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0a:8e:34:24:50:a5} reservation:<nil>}
	I1119 02:33:20.132210  315363 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c1155ea75a94 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:76:37:ad:5a:d8:36} reservation:<nil>}
	I1119 02:33:20.133253  315363 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-3ec6f45a7001 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:12:9a:69:49:8b:1f} reservation:<nil>}
	I1119 02:33:20.134343  315363 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ddf580}
	I1119 02:33:20.134393  315363 network_create.go:124] attempt to create docker network embed-certs-168452 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1119 02:33:20.134459  315363 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-168452 embed-certs-168452
	I1119 02:33:20.192566  315363 network_create.go:108] docker network embed-certs-168452 192.168.94.0/24 created
	I1119 02:33:20.192597  315363 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-168452" container
	I1119 02:33:20.192665  315363 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 02:33:20.216991  315363 cli_runner.go:164] Run: docker volume create embed-certs-168452 --label name.minikube.sigs.k8s.io=embed-certs-168452 --label created_by.minikube.sigs.k8s.io=true
	I1119 02:33:20.240868  315363 oci.go:103] Successfully created a docker volume embed-certs-168452
	I1119 02:33:20.240948  315363 cli_runner.go:164] Run: docker run --rm --name embed-certs-168452-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-168452 --entrypoint /usr/bin/test -v embed-certs-168452:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1119 02:33:20.653772  315363 oci.go:107] Successfully prepared a docker volume embed-certs-168452
	I1119 02:33:20.653851  315363 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 02:33:20.653866  315363 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 02:33:20.653963  315363 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-168452:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	I1119 02:33:20.215687  301934 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-691094" context rescaled to 1 replicas
	W1119 02:33:21.715210  301934 node_ready.go:57] node "old-k8s-version-691094" has "Ready":"False" status (will retry)
	W1119 02:33:24.323644  301934 node_ready.go:57] node "old-k8s-version-691094" has "Ready":"False" status (will retry)
	I1119 02:33:28.147893  307222 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 02:33:28.147982  307222 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 02:33:28.148104  307222 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 02:33:28.148201  307222 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 02:33:28.148256  307222 kubeadm.go:319] OS: Linux
	I1119 02:33:28.148332  307222 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 02:33:28.148450  307222 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 02:33:28.148522  307222 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 02:33:28.148596  307222 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 02:33:28.148672  307222 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 02:33:28.148756  307222 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 02:33:28.148841  307222 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 02:33:28.148915  307222 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 02:33:28.149019  307222 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 02:33:28.149159  307222 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 02:33:28.149311  307222 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 02:33:28.149421  307222 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 02:33:28.151537  307222 out.go:252]   - Generating certificates and keys ...
	I1119 02:33:28.151647  307222 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 02:33:28.151774  307222 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 02:33:28.151834  307222 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 02:33:28.151902  307222 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 02:33:28.152000  307222 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 02:33:28.152068  307222 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 02:33:28.152179  307222 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 02:33:28.152343  307222 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-483142] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 02:33:28.152451  307222 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 02:33:28.152598  307222 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-483142] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 02:33:28.152690  307222 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 02:33:28.152796  307222 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 02:33:28.152837  307222 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 02:33:28.152894  307222 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 02:33:28.152945  307222 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 02:33:28.153002  307222 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 02:33:28.153051  307222 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 02:33:28.153118  307222 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 02:33:28.153171  307222 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 02:33:28.153255  307222 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 02:33:28.153358  307222 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 02:33:28.154609  307222 out.go:252]   - Booting up control plane ...
	I1119 02:33:28.154709  307222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 02:33:28.154821  307222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 02:33:28.154904  307222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 02:33:28.155033  307222 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 02:33:28.155173  307222 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 02:33:28.155323  307222 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 02:33:28.155456  307222 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 02:33:28.155501  307222 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 02:33:28.155631  307222 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 02:33:28.155728  307222 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 02:33:28.155805  307222 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001464049s
	I1119 02:33:28.155906  307222 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 02:33:28.156017  307222 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1119 02:33:28.156135  307222 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 02:33:28.156242  307222 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 02:33:28.156335  307222 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.319882231s
	I1119 02:33:28.156456  307222 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.432703999s
	I1119 02:33:28.156560  307222 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.001475545s
	I1119 02:33:28.156685  307222 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 02:33:28.156832  307222 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 02:33:28.156917  307222 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 02:33:28.157202  307222 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-483142 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 02:33:28.157272  307222 kubeadm.go:319] [bootstrap-token] Using token: nwrx92.0c942uuundzydmcz
	I1119 02:33:28.159046  307222 out.go:252]   - Configuring RBAC rules ...
	I1119 02:33:28.159207  307222 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 02:33:28.159328  307222 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 02:33:28.159549  307222 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 02:33:28.159720  307222 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 02:33:28.159922  307222 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 02:33:28.160077  307222 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 02:33:28.160254  307222 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 02:33:28.160329  307222 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 02:33:28.160427  307222 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 02:33:28.160443  307222 kubeadm.go:319] 
	I1119 02:33:28.160527  307222 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 02:33:28.160536  307222 kubeadm.go:319] 
	I1119 02:33:28.160603  307222 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 02:33:28.160610  307222 kubeadm.go:319] 
	I1119 02:33:28.160642  307222 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 02:33:28.160730  307222 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 02:33:28.160832  307222 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 02:33:28.160845  307222 kubeadm.go:319] 
	I1119 02:33:28.160922  307222 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 02:33:28.160942  307222 kubeadm.go:319] 
	I1119 02:33:28.161016  307222 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 02:33:28.161031  307222 kubeadm.go:319] 
	I1119 02:33:28.161114  307222 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 02:33:28.161229  307222 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 02:33:28.161347  307222 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 02:33:28.161359  307222 kubeadm.go:319] 
	I1119 02:33:28.161531  307222 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 02:33:28.161656  307222 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 02:33:28.161665  307222 kubeadm.go:319] 
	I1119 02:33:28.161797  307222 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token nwrx92.0c942uuundzydmcz \
	I1119 02:33:28.161968  307222 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7672c4bccf136c60db27ba1101c0494bf335a1a16bcd609d07ab95cd78c58a5a \
	I1119 02:33:28.162022  307222 kubeadm.go:319] 	--control-plane 
	I1119 02:33:28.162036  307222 kubeadm.go:319] 
	I1119 02:33:28.162163  307222 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 02:33:28.162174  307222 kubeadm.go:319] 
	I1119 02:33:28.162301  307222 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token nwrx92.0c942uuundzydmcz \
	I1119 02:33:28.162456  307222 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7672c4bccf136c60db27ba1101c0494bf335a1a16bcd609d07ab95cd78c58a5a 
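
Editor's note: the --discovery-token-ca-cert-hash value printed in the join commands above is a SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A minimal Go sketch of recomputing it, assuming the conventional kubeadm CA location /etc/kubernetes/pki/ca.crt (a path not shown in this log):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Assumed conventional kubeadm path for the cluster CA certificate.
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
}

The output should match the sha256:... token in the kubeadm join lines above.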
	I1119 02:33:28.162469  307222 cni.go:84] Creating CNI manager for ""
	I1119 02:33:28.162475  307222 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 02:33:28.164382  307222 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 02:33:25.786283  315363 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-168452:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (5.132274902s)
	I1119 02:33:25.786322  315363 kic.go:203] duration metric: took 5.132452147s to extract preloaded images to volume ...
	W1119 02:33:25.786460  315363 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1119 02:33:25.786504  315363 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1119 02:33:25.786554  315363 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 02:33:25.853413  315363 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-168452 --name embed-certs-168452 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-168452 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-168452 --network embed-certs-168452 --ip 192.168.94.2 --volume embed-certs-168452:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1119 02:33:26.238651  315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Running}}
	I1119 02:33:26.261169  315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Status}}
	I1119 02:33:26.284313  315363 cli_runner.go:164] Run: docker exec embed-certs-168452 stat /var/lib/dpkg/alternatives/iptables
	I1119 02:33:26.336955  315363 oci.go:144] the created container "embed-certs-168452" has a running status.
	I1119 02:33:26.336985  315363 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa...
	I1119 02:33:26.484310  315363 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 02:33:26.517116  315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Status}}
	I1119 02:33:26.542901  315363 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 02:33:26.542925  315363 kic_runner.go:114] Args: [docker exec --privileged embed-certs-168452 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 02:33:26.595205  315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Status}}
	I1119 02:33:26.623359  315363 machine.go:94] provisionDockerMachine start ...
	I1119 02:33:26.623527  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:26.646254  315363 main.go:143] libmachine: Using SSH client type: native
	I1119 02:33:26.646550  315363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33105 <nil> <nil>}
	I1119 02:33:26.646569  315363 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 02:33:26.799221  315363 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-168452
	
	I1119 02:33:26.799250  315363 ubuntu.go:182] provisioning hostname "embed-certs-168452"
	I1119 02:33:26.799334  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:26.820929  315363 main.go:143] libmachine: Using SSH client type: native
	I1119 02:33:26.821188  315363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33105 <nil> <nil>}
	I1119 02:33:26.821210  315363 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-168452 && echo "embed-certs-168452" | sudo tee /etc/hostname
	I1119 02:33:26.966035  315363 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-168452
	
	I1119 02:33:26.966125  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:26.985276  315363 main.go:143] libmachine: Using SSH client type: native
	I1119 02:33:26.985598  315363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33105 <nil> <nil>}
	I1119 02:33:26.985633  315363 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-168452' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-168452/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-168452' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 02:33:27.121670  315363 main.go:143] libmachine: SSH cmd err, output: <nil>: 
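
Editor's note: the SSH script just executed makes the new hostname locally resolvable, rewriting an existing 127.0.1.1 entry (the Debian/Ubuntu convention for the machine's own name) or appending one. A rough Go equivalent of that idempotent edit, purely illustrative and not minikube's actual code:

package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	content := string(data)
	// Mirror the script's first grep: do nothing if the name already resolves.
	if strings.Contains(content, hostname) {
		return nil
	}
	lines := strings.Split(content, "\n")
	for i, line := range lines {
		if strings.HasPrefix(line, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname // replace the existing entry
			return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
		}
	}
	lines = append(lines, "127.0.1.1 "+hostname) // no entry yet: append one
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "embed-certs-168452"); err != nil {
		panic(err)
	}
	fmt.Println("hosts entry ensured")
}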
	I1119 02:33:27.121703  315363 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-11107/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-11107/.minikube}
	I1119 02:33:27.121727  315363 ubuntu.go:190] setting up certificates
	I1119 02:33:27.123000  315363 provision.go:84] configureAuth start
	I1119 02:33:27.123195  315363 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-168452
	I1119 02:33:27.143490  315363 provision.go:143] copyHostCerts
	I1119 02:33:27.143570  315363 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11107/.minikube/ca.pem, removing ...
	I1119 02:33:27.143580  315363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11107/.minikube/ca.pem
	I1119 02:33:27.143645  315363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-11107/.minikube/ca.pem (1082 bytes)
	I1119 02:33:27.143736  315363 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11107/.minikube/cert.pem, removing ...
	I1119 02:33:27.143744  315363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11107/.minikube/cert.pem
	I1119 02:33:27.143773  315363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-11107/.minikube/cert.pem (1123 bytes)
	I1119 02:33:27.143829  315363 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11107/.minikube/key.pem, removing ...
	I1119 02:33:27.143835  315363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11107/.minikube/key.pem
	I1119 02:33:27.143858  315363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-11107/.minikube/key.pem (1675 bytes)
	I1119 02:33:27.143923  315363 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-11107/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca-key.pem org=jenkins.embed-certs-168452 san=[127.0.0.1 192.168.94.2 embed-certs-168452 localhost minikube]
	I1119 02:33:27.239080  315363 provision.go:177] copyRemoteCerts
	I1119 02:33:27.239165  315363 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 02:33:27.239198  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:27.262397  315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
	I1119 02:33:27.362967  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 02:33:27.387666  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1119 02:33:27.418735  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 02:33:27.446098  315363 provision.go:87] duration metric: took 323.082791ms to configureAuth
	I1119 02:33:27.446129  315363 ubuntu.go:206] setting minikube options for container-runtime
	I1119 02:33:27.446313  315363 config.go:182] Loaded profile config "embed-certs-168452": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:33:27.446327  315363 machine.go:97] duration metric: took 822.891862ms to provisionDockerMachine
	I1119 02:33:27.446333  315363 client.go:176] duration metric: took 7.38010166s to LocalClient.Create
	I1119 02:33:27.446351  315363 start.go:167] duration metric: took 7.380160884s to libmachine.API.Create "embed-certs-168452"
	I1119 02:33:27.446358  315363 start.go:293] postStartSetup for "embed-certs-168452" (driver="docker")
	I1119 02:33:27.446409  315363 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 02:33:27.446465  315363 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 02:33:27.446501  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:27.470807  315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
	I1119 02:33:27.575097  315363 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 02:33:27.580067  315363 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 02:33:27.580102  315363 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 02:33:27.580115  315363 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11107/.minikube/addons for local assets ...
	I1119 02:33:27.580188  315363 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11107/.minikube/files for local assets ...
	I1119 02:33:27.580303  315363 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem -> 146572.pem in /etc/ssl/certs
	I1119 02:33:27.580434  315363 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 02:33:27.588848  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem --> /etc/ssl/certs/146572.pem (1708 bytes)
	I1119 02:33:27.611498  315363 start.go:296] duration metric: took 165.12815ms for postStartSetup
	I1119 02:33:27.611895  315363 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-168452
	I1119 02:33:27.630987  315363 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/config.json ...
	I1119 02:33:27.631276  315363 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:33:27.631327  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:27.650599  315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
	I1119 02:33:27.747119  315363 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 02:33:27.752242  315363 start.go:128] duration metric: took 7.68828048s to createHost
	I1119 02:33:27.752270  315363 start.go:83] releasing machines lock for "embed-certs-168452", held for 7.688466151s
	I1119 02:33:27.752448  315363 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-168452
	I1119 02:33:27.772595  315363 ssh_runner.go:195] Run: cat /version.json
	I1119 02:33:27.772634  315363 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 02:33:27.772668  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:27.772695  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:27.795020  315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
	I1119 02:33:27.795311  315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
	I1119 02:33:27.889466  315363 ssh_runner.go:195] Run: systemctl --version
	I1119 02:33:27.948057  315363 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 02:33:27.953270  315363 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 02:33:27.953328  315363 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 02:33:27.979962  315363 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 02:33:27.979983  315363 start.go:496] detecting cgroup driver to use...
	I1119 02:33:27.980013  315363 detect.go:190] detected "systemd" cgroup driver on host os
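
Editor's note: one common heuristic for the cgroup-driver detection logged above, sketched in Go (an assumption about the approach, not necessarily minikube's exact check): hosts booted with systemd expose /run/systemd/system, and a cgroup v2 unified hierarchy exposes /sys/fs/cgroup/cgroup.controllers.

package main

import (
	"fmt"
	"os"
)

func detectCgroupDriver() string {
	// systemd-managed hosts have this directory; otherwise fall back.
	if _, err := os.Stat("/run/systemd/system"); err == nil {
		return "systemd"
	}
	return "cgroupfs"
}

func main() {
	fmt.Println("detected cgroup driver:", detectCgroupDriver())
	// Present only on a cgroup v2 unified hierarchy.
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		fmt.Println("cgroup v2 unified hierarchy present")
	}
}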
	I1119 02:33:27.980050  315363 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1119 02:33:27.995148  315363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1119 02:33:28.009176  315363 docker.go:218] disabling cri-docker service (if available) ...
	I1119 02:33:28.009239  315363 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 02:33:28.028120  315363 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 02:33:28.047654  315363 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 02:33:28.137742  315363 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 02:33:28.233503  315363 docker.go:234] disabling docker service ...
	I1119 02:33:28.233569  315363 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 02:33:28.254546  315363 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 02:33:28.270970  315363 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 02:33:28.372358  315363 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 02:33:28.475816  315363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 02:33:28.494447  315363 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 02:33:28.514112  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1119 02:33:28.528713  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1119 02:33:28.542307  315363 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1119 02:33:28.542395  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1119 02:33:28.553682  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 02:33:28.564425  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1119 02:33:28.574563  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 02:33:28.585047  315363 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 02:33:28.594876  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1119 02:33:28.606066  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1119 02:33:28.616549  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1119 02:33:28.627283  315363 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 02:33:28.635846  315363 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 02:33:28.643854  315363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:33:28.727138  315363 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1119 02:33:28.825075  315363 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1119 02:33:28.825141  315363 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
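
Editor's note: the "Will wait 60s for socket path" step above amounts to polling until a unix-domain dial succeeds or the deadline passes. A minimal Go sketch of that wait, illustrative only:

package main

import (
	"fmt"
	"net"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// A successful dial means containerd is accepting connections.
		conn, err := net.DialTimeout("unix", path, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(250 * time.Millisecond)
	}
	return fmt.Errorf("socket %s not ready after %s", path, timeout)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("containerd socket is accepting connections")
}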
	I1119 02:33:28.829886  315363 start.go:564] Will wait 60s for crictl version
	I1119 02:33:28.829954  315363 ssh_runner.go:195] Run: which crictl
	I1119 02:33:28.834062  315363 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 02:33:28.859386  315363 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1119 02:33:28.859454  315363 ssh_runner.go:195] Run: containerd --version
	I1119 02:33:28.881932  315363 ssh_runner.go:195] Run: containerd --version
	I1119 02:33:28.905418  315363 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1119 02:33:28.906851  315363 cli_runner.go:164] Run: docker network inspect embed-certs-168452 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:33:28.925576  315363 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1119 02:33:28.930043  315363 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:33:28.941472  315363 kubeadm.go:884] updating cluster {Name:embed-certs-168452 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-168452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 02:33:28.941570  315363 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 02:33:28.941633  315363 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:33:28.969084  315363 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 02:33:28.969102  315363 containerd.go:534] Images already preloaded, skipping extraction
	I1119 02:33:28.969159  315363 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:33:28.994529  315363 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 02:33:28.994549  315363 cache_images.go:86] Images are preloaded, skipping loading
	I1119 02:33:28.994556  315363 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 containerd true true} ...
	I1119 02:33:28.994637  315363 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-168452 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-168452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 02:33:28.994694  315363 ssh_runner.go:195] Run: sudo crictl info
	I1119 02:33:29.023174  315363 cni.go:84] Creating CNI manager for ""
	I1119 02:33:29.023197  315363 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 02:33:29.023211  315363 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 02:33:29.023232  315363 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-168452 NodeName:embed-certs-168452 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:33:29.023337  315363 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-168452"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 02:33:29.023423  315363 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 02:33:29.032358  315363 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 02:33:29.032438  315363 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:33:29.041206  315363 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1119 02:33:29.056159  315363 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:33:29.074583  315363 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
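
Editor's note: the kubeadm config rendered above and written out here as kubeadm.yaml.new is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A sketch of pulling one document back out with gopkg.in/yaml.v3, whose decoder yields one document per Decode call until io.EOF; the field names match the KubeletConfiguration document shown in the log:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// doc captures only the fields we want from the KubeletConfiguration document.
type doc struct {
	Kind         string            `yaml:"kind"`
	CgroupDriver string            `yaml:"cgroupDriver"`
	EvictionHard map[string]string `yaml:"evictionHard"`
}

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path from the log
	if err != nil {
		panic(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		var d doc
		if err := dec.Decode(&d); err == io.EOF {
			break // end of the multi-document stream
		} else if err != nil {
			panic(err)
		}
		if d.Kind == "KubeletConfiguration" {
			fmt.Println("cgroupDriver:", d.CgroupDriver)
			fmt.Println("evictionHard:", d.EvictionHard)
		}
	}
}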
	I1119 02:33:29.089316  315363 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:33:29.093854  315363 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:33:29.106602  315363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:33:29.193818  315363 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:33:29.220027  315363 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452 for IP: 192.168.94.2
	I1119 02:33:29.220053  315363 certs.go:195] generating shared ca certs ...
	I1119 02:33:29.220075  315363 certs.go:227] acquiring lock for ca certs: {Name:mk11d6789b2333e17b3937495b501fbcca15c242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:29.220231  315363 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-11107/.minikube/ca.key
	I1119 02:33:29.220278  315363 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-11107/.minikube/proxy-client-ca.key
	I1119 02:33:29.220287  315363 certs.go:257] generating profile certs ...
	I1119 02:33:29.220334  315363 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/client.key
	I1119 02:33:29.220351  315363 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/client.crt with IP's: []
	I1119 02:33:29.496773  315363 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/client.crt ...
	I1119 02:33:29.496800  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/client.crt: {Name:mkdb5e24f9c8b0d3d9849ba91ac24e28be0abdf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:29.496993  315363 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/client.key ...
	I1119 02:33:29.497006  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/client.key: {Name:mk5aa88fe9180cc5f94c07d5a968428b4ccf37cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:29.497088  315363 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.key.0e221cc2
	I1119 02:33:29.497102  315363 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.crt.0e221cc2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
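
Editor's note: the apiserver profile cert above is generated with four IP SANs: the in-cluster service VIP (10.96.0.1), loopback, 10.0.0.1, and the node IP. A Go sketch producing a certificate with the same SAN list; it self-signs for brevity, whereas minikube actually signs with its profile CA:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The IP SANs from the log line above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.94.2"),
		},
	}
	// Self-signed: template doubles as the parent certificate.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}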
	W1119 02:33:26.721525  301934 node_ready.go:57] node "old-k8s-version-691094" has "Ready":"False" status (will retry)
	W1119 02:33:29.215940  301934 node_ready.go:57] node "old-k8s-version-691094" has "Ready":"False" status (will retry)
	I1119 02:33:28.165835  307222 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 02:33:28.176028  307222 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 02:33:28.176052  307222 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 02:33:28.195615  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 02:33:28.450816  307222 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 02:33:28.450899  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:28.450933  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-483142 minikube.k8s.io/updated_at=2025_11_19T02_33_28_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=no-preload-483142 minikube.k8s.io/primary=true
	I1119 02:33:28.538275  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:28.538445  307222 ops.go:34] apiserver oom_adj: -16
	I1119 02:33:29.038968  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:29.539224  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:30.038530  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:30.539271  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:31.038434  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:31.538496  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:32.038945  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:32.539001  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:33.038571  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:33.129034  307222 kubeadm.go:1114] duration metric: took 4.678195875s to wait for elevateKubeSystemPrivileges
	I1119 02:33:33.129095  307222 kubeadm.go:403] duration metric: took 17.40558167s to StartCluster
	I1119 02:33:33.129119  307222 settings.go:142] acquiring lock: {Name:mka23b77a5b0acf39a3c823ca2c708cd59c6f5ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:33.129202  307222 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11107/kubeconfig
	I1119 02:33:33.131182  307222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/kubeconfig: {Name:mka1017ae43351e828a5b28d36aa5ae57c3e40e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:33.131481  307222 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 02:33:33.131519  307222 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 02:33:33.131585  307222 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:33:33.131706  307222 addons.go:70] Setting storage-provisioner=true in profile "no-preload-483142"
	I1119 02:33:33.131748  307222 config.go:182] Loaded profile config "no-preload-483142": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:33:33.131794  307222 addons.go:70] Setting default-storageclass=true in profile "no-preload-483142"
	I1119 02:33:33.131827  307222 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-483142"
	I1119 02:33:33.131810  307222 addons.go:239] Setting addon storage-provisioner=true in "no-preload-483142"
	I1119 02:33:33.131959  307222 host.go:66] Checking if "no-preload-483142" exists ...
	I1119 02:33:33.132200  307222 cli_runner.go:164] Run: docker container inspect no-preload-483142 --format={{.State.Status}}
	I1119 02:33:33.132480  307222 cli_runner.go:164] Run: docker container inspect no-preload-483142 --format={{.State.Status}}
	I1119 02:33:33.134152  307222 out.go:179] * Verifying Kubernetes components...
	I1119 02:33:33.135585  307222 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:33:33.159834  307222 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:33:33.160479  307222 addons.go:239] Setting addon default-storageclass=true in "no-preload-483142"
	I1119 02:33:33.160545  307222 host.go:66] Checking if "no-preload-483142" exists ...
	I1119 02:33:33.161017  307222 cli_runner.go:164] Run: docker container inspect no-preload-483142 --format={{.State.Status}}
	I1119 02:33:33.161390  307222 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:33:33.161410  307222 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:33:33.161458  307222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-483142
	I1119 02:33:33.198354  307222 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:33:33.198390  307222 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:33:33.198448  307222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-483142
	I1119 02:33:33.198522  307222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/no-preload-483142/id_rsa Username:docker}
	I1119 02:33:33.223657  307222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/no-preload-483142/id_rsa Username:docker}
	I1119 02:33:33.248952  307222 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 02:33:33.322673  307222 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:33:33.348662  307222 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:33:33.354901  307222 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:33:33.503051  307222 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1119 02:33:33.504327  307222 node_ready.go:35] waiting up to 6m0s for node "no-preload-483142" to be "Ready" ...
	I1119 02:33:33.756829  307222 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 02:33:29.844643  315363 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.crt.0e221cc2 ...
	I1119 02:33:29.844667  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.crt.0e221cc2: {Name:mk1596cf7137a998e277abf94c4c839907009a9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:29.844872  315363 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.key.0e221cc2 ...
	I1119 02:33:29.844901  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.key.0e221cc2: {Name:mk9d817ab63555ebb02e0590916ce23352cf008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:29.845022  315363 certs.go:382] copying /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.crt.0e221cc2 -> /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.crt
	I1119 02:33:29.845144  315363 certs.go:386] copying /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.key.0e221cc2 -> /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.key
	I1119 02:33:29.845239  315363 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.key
	I1119 02:33:29.845260  315363 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.crt with IP's: []
	I1119 02:33:30.013529  315363 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.crt ...
	I1119 02:33:30.013564  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.crt: {Name:mka683634a30ab1845434f0fc49f75059694b447 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:30.013775  315363 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.key ...
	I1119 02:33:30.013796  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.key: {Name:mk9e8dbde74fbcae5bb0e966570ae4f43c6f07e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:30.014054  315363 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/14657.pem (1338 bytes)
	W1119 02:33:30.014108  315363 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-11107/.minikube/certs/14657_empty.pem, impossibly tiny 0 bytes
	I1119 02:33:30.014124  315363 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 02:33:30.014183  315363 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem (1082 bytes)
	I1119 02:33:30.014219  315363 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:33:30.014257  315363 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/key.pem (1675 bytes)
	I1119 02:33:30.014318  315363 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem (1708 bytes)
	I1119 02:33:30.014986  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:33:30.034798  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 02:33:30.054155  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:33:30.074272  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:33:30.094396  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1119 02:33:30.114605  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 02:33:30.133991  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:33:30.153105  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 02:33:30.172052  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem --> /usr/share/ca-certificates/146572.pem (1708 bytes)
	I1119 02:33:30.194139  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:33:30.212546  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/certs/14657.pem --> /usr/share/ca-certificates/14657.pem (1338 bytes)
	I1119 02:33:30.231534  315363 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 02:33:30.246493  315363 ssh_runner.go:195] Run: openssl version
	I1119 02:33:30.252586  315363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146572.pem && ln -fs /usr/share/ca-certificates/146572.pem /etc/ssl/certs/146572.pem"
	I1119 02:33:30.261620  315363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146572.pem
	I1119 02:33:30.265824  315363 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:02 /usr/share/ca-certificates/146572.pem
	I1119 02:33:30.265886  315363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146572.pem
	I1119 02:33:30.301164  315363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/146572.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 02:33:30.310429  315363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:33:30.319818  315363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:33:30.323998  315363 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:57 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:33:30.324046  315363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:33:30.360567  315363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 02:33:30.370492  315363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14657.pem && ln -fs /usr/share/ca-certificates/14657.pem /etc/ssl/certs/14657.pem"
	I1119 02:33:30.380695  315363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14657.pem
	I1119 02:33:30.385171  315363 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:02 /usr/share/ca-certificates/14657.pem
	I1119 02:33:30.385241  315363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14657.pem
	I1119 02:33:30.422375  315363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14657.pem /etc/ssl/certs/51391683.0"
	I1119 02:33:30.432329  315363 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:33:30.436333  315363 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 02:33:30.436432  315363 kubeadm.go:401] StartCluster: {Name:embed-certs-168452 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-168452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:33:30.436494  315363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1119 02:33:30.436588  315363 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:33:30.465191  315363 cri.go:89] found id: ""
	I1119 02:33:30.465255  315363 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:33:30.474328  315363 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 02:33:30.483132  315363 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 02:33:30.483196  315363 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 02:33:30.491249  315363 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 02:33:30.491272  315363 kubeadm.go:158] found existing configuration files:
	
	I1119 02:33:30.491320  315363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 02:33:30.499072  315363 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 02:33:30.499140  315363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 02:33:30.507018  315363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 02:33:30.514836  315363 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 02:33:30.514890  315363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 02:33:30.523396  315363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 02:33:30.532721  315363 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 02:33:30.532772  315363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 02:33:30.541409  315363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 02:33:30.550090  315363 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 02:33:30.550157  315363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
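The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: any /etc/kubernetes/*.conf that does not reference the expected control-plane endpoint is deleted before kubeadm init runs. Condensed into an equivalent shell sketch (here every grep exits with status 2 because the files do not exist, so each rm is a no-op):

    ENDPOINT=https://control-plane.minikube.internal:8443
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done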
	I1119 02:33:30.558693  315363 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 02:33:30.636057  315363 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1119 02:33:30.702518  315363 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
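Both [WARNING] lines are non-fatal: SystemVerification is explicitly listed in --ignore-preflight-errors above (the 'configs' kernel module simply is not shipped with this GCP kernel), and the kubelet-service warning only notes that the unit is not enabled for boot. If desired, the latter could be silenced with the command kubeadm itself suggests:

    minikube ssh -p embed-certs-168452 -- sudo systemctl enable kubelet.service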
	W1119 02:33:31.715333  301934 node_ready.go:57] node "old-k8s-version-691094" has "Ready":"False" status (will retry)
	W1119 02:33:33.715963  301934 node_ready.go:57] node "old-k8s-version-691094" has "Ready":"False" status (will retry)
	I1119 02:33:34.216972  301934 node_ready.go:49] node "old-k8s-version-691094" is "Ready"
	I1119 02:33:34.217010  301934 node_ready.go:38] duration metric: took 14.505569399s for node "old-k8s-version-691094" to be "Ready" ...
	I1119 02:33:34.217027  301934 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:33:34.217083  301934 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:33:34.235995  301934 api_server.go:72] duration metric: took 14.98160502s to wait for apiserver process to appear ...
	I1119 02:33:34.236024  301934 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:33:34.236046  301934 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1119 02:33:34.242612  301934 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1119 02:33:34.244469  301934 api_server.go:141] control plane version: v1.28.0
	I1119 02:33:34.244501  301934 api_server.go:131] duration metric: took 8.468136ms to wait for apiserver health ...
	I1119 02:33:34.244512  301934 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 02:33:34.249250  301934 system_pods.go:59] 8 kube-system pods found
	I1119 02:33:34.249293  301934 system_pods.go:61] "coredns-5dd5756b68-bbvqz" [56c0e21e-9d86-46c6-bc02-2a75554c0f07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:34.249301  301934 system_pods.go:61] "etcd-old-k8s-version-691094" [76abba5a-69d9-4c2c-aeb1-0225fad93a1d] Running
	I1119 02:33:34.249308  301934 system_pods.go:61] "kindnet-b9cwh" [3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352] Running
	I1119 02:33:34.249326  301934 system_pods.go:61] "kube-apiserver-old-k8s-version-691094" [f47acebe-490a-4f18-bb22-ab3375a572a9] Running
	I1119 02:33:34.249331  301934 system_pods.go:61] "kube-controller-manager-old-k8s-version-691094" [b1768141-5e2e-4b98-9747-6d96a1e4f121] Running
	I1119 02:33:34.249336  301934 system_pods.go:61] "kube-proxy-79df5" [d23dd2d3-6511-45fb-ae70-d1da7b9b6b28] Running
	I1119 02:33:34.249340  301934 system_pods.go:61] "kube-scheduler-old-k8s-version-691094" [ac8b4be1-254b-4014-93ea-35ba777dc762] Running
	I1119 02:33:34.249347  301934 system_pods.go:61] "storage-provisioner" [135636ea-f34f-4bfc-b2f6-cbbf3e91ca30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:34.249389  301934 system_pods.go:74] duration metric: took 4.842718ms to wait for pod list to return data ...
	I1119 02:33:34.249403  301934 default_sa.go:34] waiting for default service account to be created ...
	I1119 02:33:34.251979  301934 default_sa.go:45] found service account: "default"
	I1119 02:33:34.252000  301934 default_sa.go:55] duration metric: took 2.59102ms for default service account to be created ...
	I1119 02:33:34.252008  301934 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 02:33:34.256098  301934 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:34.256141  301934 system_pods.go:89] "coredns-5dd5756b68-bbvqz" [56c0e21e-9d86-46c6-bc02-2a75554c0f07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:34.256148  301934 system_pods.go:89] "etcd-old-k8s-version-691094" [76abba5a-69d9-4c2c-aeb1-0225fad93a1d] Running
	I1119 02:33:34.256155  301934 system_pods.go:89] "kindnet-b9cwh" [3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352] Running
	I1119 02:33:34.256158  301934 system_pods.go:89] "kube-apiserver-old-k8s-version-691094" [f47acebe-490a-4f18-bb22-ab3375a572a9] Running
	I1119 02:33:34.256163  301934 system_pods.go:89] "kube-controller-manager-old-k8s-version-691094" [b1768141-5e2e-4b98-9747-6d96a1e4f121] Running
	I1119 02:33:34.256166  301934 system_pods.go:89] "kube-proxy-79df5" [d23dd2d3-6511-45fb-ae70-d1da7b9b6b28] Running
	I1119 02:33:34.256169  301934 system_pods.go:89] "kube-scheduler-old-k8s-version-691094" [ac8b4be1-254b-4014-93ea-35ba777dc762] Running
	I1119 02:33:34.256173  301934 system_pods.go:89] "storage-provisioner" [135636ea-f34f-4bfc-b2f6-cbbf3e91ca30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:34.256204  301934 retry.go:31] will retry after 294.08163ms: missing components: kube-dns
	I1119 02:33:34.555117  301934 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:34.555149  301934 system_pods.go:89] "coredns-5dd5756b68-bbvqz" [56c0e21e-9d86-46c6-bc02-2a75554c0f07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:34.555155  301934 system_pods.go:89] "etcd-old-k8s-version-691094" [76abba5a-69d9-4c2c-aeb1-0225fad93a1d] Running
	I1119 02:33:34.555160  301934 system_pods.go:89] "kindnet-b9cwh" [3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352] Running
	I1119 02:33:34.555164  301934 system_pods.go:89] "kube-apiserver-old-k8s-version-691094" [f47acebe-490a-4f18-bb22-ab3375a572a9] Running
	I1119 02:33:34.555168  301934 system_pods.go:89] "kube-controller-manager-old-k8s-version-691094" [b1768141-5e2e-4b98-9747-6d96a1e4f121] Running
	I1119 02:33:34.555171  301934 system_pods.go:89] "kube-proxy-79df5" [d23dd2d3-6511-45fb-ae70-d1da7b9b6b28] Running
	I1119 02:33:34.555174  301934 system_pods.go:89] "kube-scheduler-old-k8s-version-691094" [ac8b4be1-254b-4014-93ea-35ba777dc762] Running
	I1119 02:33:34.555181  301934 system_pods.go:89] "storage-provisioner" [135636ea-f34f-4bfc-b2f6-cbbf3e91ca30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:34.555200  301934 retry.go:31] will retry after 239.208285ms: missing components: kube-dns
	I1119 02:33:34.801314  301934 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:34.801356  301934 system_pods.go:89] "coredns-5dd5756b68-bbvqz" [56c0e21e-9d86-46c6-bc02-2a75554c0f07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:34.801397  301934 system_pods.go:89] "etcd-old-k8s-version-691094" [76abba5a-69d9-4c2c-aeb1-0225fad93a1d] Running
	I1119 02:33:34.801408  301934 system_pods.go:89] "kindnet-b9cwh" [3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352] Running
	I1119 02:33:34.801414  301934 system_pods.go:89] "kube-apiserver-old-k8s-version-691094" [f47acebe-490a-4f18-bb22-ab3375a572a9] Running
	I1119 02:33:34.801421  301934 system_pods.go:89] "kube-controller-manager-old-k8s-version-691094" [b1768141-5e2e-4b98-9747-6d96a1e4f121] Running
	I1119 02:33:34.801426  301934 system_pods.go:89] "kube-proxy-79df5" [d23dd2d3-6511-45fb-ae70-d1da7b9b6b28] Running
	I1119 02:33:34.801432  301934 system_pods.go:89] "kube-scheduler-old-k8s-version-691094" [ac8b4be1-254b-4014-93ea-35ba777dc762] Running
	I1119 02:33:34.801446  301934 system_pods.go:89] "storage-provisioner" [135636ea-f34f-4bfc-b2f6-cbbf3e91ca30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:34.801465  301934 retry.go:31] will retry after 406.320974ms: missing components: kube-dns
	I1119 02:33:33.758898  307222 addons.go:515] duration metric: took 627.311179ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 02:33:34.007122  307222 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-483142" context rescaled to 1 replicas
	W1119 02:33:35.507777  307222 node_ready.go:57] node "no-preload-483142" has "Ready":"False" status (will retry)
	I1119 02:33:35.212153  301934 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:35.212193  301934 system_pods.go:89] "coredns-5dd5756b68-bbvqz" [56c0e21e-9d86-46c6-bc02-2a75554c0f07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:35.212202  301934 system_pods.go:89] "etcd-old-k8s-version-691094" [76abba5a-69d9-4c2c-aeb1-0225fad93a1d] Running
	I1119 02:33:35.212208  301934 system_pods.go:89] "kindnet-b9cwh" [3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352] Running
	I1119 02:33:35.212214  301934 system_pods.go:89] "kube-apiserver-old-k8s-version-691094" [f47acebe-490a-4f18-bb22-ab3375a572a9] Running
	I1119 02:33:35.212221  301934 system_pods.go:89] "kube-controller-manager-old-k8s-version-691094" [b1768141-5e2e-4b98-9747-6d96a1e4f121] Running
	I1119 02:33:35.212226  301934 system_pods.go:89] "kube-proxy-79df5" [d23dd2d3-6511-45fb-ae70-d1da7b9b6b28] Running
	I1119 02:33:35.212230  301934 system_pods.go:89] "kube-scheduler-old-k8s-version-691094" [ac8b4be1-254b-4014-93ea-35ba777dc762] Running
	I1119 02:33:35.212235  301934 system_pods.go:89] "storage-provisioner" [135636ea-f34f-4bfc-b2f6-cbbf3e91ca30] Running
	I1119 02:33:35.212252  301934 retry.go:31] will retry after 502.533324ms: missing components: kube-dns
	I1119 02:33:35.719172  301934 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:35.719211  301934 system_pods.go:89] "coredns-5dd5756b68-bbvqz" [56c0e21e-9d86-46c6-bc02-2a75554c0f07] Running
	I1119 02:33:35.719220  301934 system_pods.go:89] "etcd-old-k8s-version-691094" [76abba5a-69d9-4c2c-aeb1-0225fad93a1d] Running
	I1119 02:33:35.719225  301934 system_pods.go:89] "kindnet-b9cwh" [3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352] Running
	I1119 02:33:35.719231  301934 system_pods.go:89] "kube-apiserver-old-k8s-version-691094" [f47acebe-490a-4f18-bb22-ab3375a572a9] Running
	I1119 02:33:35.719238  301934 system_pods.go:89] "kube-controller-manager-old-k8s-version-691094" [b1768141-5e2e-4b98-9747-6d96a1e4f121] Running
	I1119 02:33:35.719243  301934 system_pods.go:89] "kube-proxy-79df5" [d23dd2d3-6511-45fb-ae70-d1da7b9b6b28] Running
	I1119 02:33:35.719248  301934 system_pods.go:89] "kube-scheduler-old-k8s-version-691094" [ac8b4be1-254b-4014-93ea-35ba777dc762] Running
	I1119 02:33:35.719254  301934 system_pods.go:89] "storage-provisioner" [135636ea-f34f-4bfc-b2f6-cbbf3e91ca30] Running
	I1119 02:33:35.719267  301934 system_pods.go:126] duration metric: took 1.46725409s to wait for k8s-apps to be running ...
	I1119 02:33:35.719280  301934 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 02:33:35.719333  301934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:33:35.733944  301934 system_svc.go:56] duration metric: took 14.654804ms WaitForService to wait for kubelet
	I1119 02:33:35.733974  301934 kubeadm.go:587] duration metric: took 16.479589704s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:33:35.733994  301934 node_conditions.go:102] verifying NodePressure condition ...
	I1119 02:33:35.736881  301934 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 02:33:35.736904  301934 node_conditions.go:123] node cpu capacity is 8
	I1119 02:33:35.736917  301934 node_conditions.go:105] duration metric: took 2.917087ms to run NodePressure ...
	I1119 02:33:35.736947  301934 start.go:242] waiting for startup goroutines ...
	I1119 02:33:35.736956  301934 start.go:247] waiting for cluster config update ...
	I1119 02:33:35.736966  301934 start.go:256] writing updated cluster config ...
	I1119 02:33:35.737252  301934 ssh_runner.go:195] Run: rm -f paused
	I1119 02:33:35.741706  301934 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:33:35.746693  301934 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-bbvqz" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:35.751796  301934 pod_ready.go:94] pod "coredns-5dd5756b68-bbvqz" is "Ready"
	I1119 02:33:35.751821  301934 pod_ready.go:86] duration metric: took 5.102077ms for pod "coredns-5dd5756b68-bbvqz" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:35.754811  301934 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:35.759826  301934 pod_ready.go:94] pod "etcd-old-k8s-version-691094" is "Ready"
	I1119 02:33:35.759852  301934 pod_ready.go:86] duration metric: took 5.017899ms for pod "etcd-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:35.763701  301934 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:35.768670  301934 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-691094" is "Ready"
	I1119 02:33:35.768693  301934 pod_ready.go:86] duration metric: took 4.969901ms for pod "kube-apiserver-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:35.772227  301934 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:36.146684  301934 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-691094" is "Ready"
	I1119 02:33:36.146718  301934 pod_ready.go:86] duration metric: took 374.468133ms for pod "kube-controller-manager-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:36.347472  301934 pod_ready.go:83] waiting for pod "kube-proxy-79df5" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:36.746791  301934 pod_ready.go:94] pod "kube-proxy-79df5" is "Ready"
	I1119 02:33:36.746855  301934 pod_ready.go:86] duration metric: took 399.347819ms for pod "kube-proxy-79df5" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:36.946961  301934 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:37.347059  301934 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-691094" is "Ready"
	I1119 02:33:37.347090  301934 pod_ready.go:86] duration metric: took 400.10454ms for pod "kube-scheduler-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:37.347108  301934 pod_ready.go:40] duration metric: took 1.605370699s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:33:37.406793  301934 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1119 02:33:37.408685  301934 out.go:203] 
	W1119 02:33:37.410052  301934 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1119 02:33:37.411691  301934 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1119 02:33:37.413481  301934 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-691094" cluster and "default" namespace by default
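The skew warning two lines up is expected: kubectl is only supported within one minor version of the apiserver, and a v1.34.2 client against a v1.28.0 cluster is six minors apart. The suggested workaround uses the version-matched kubectl that minikube downloads for the profile:

    minikube kubectl -p old-k8s-version-691094 -- get pods -A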
	W1119 02:33:37.511440  307222 node_ready.go:57] node "no-preload-483142" has "Ready":"False" status (will retry)
	W1119 02:33:40.007282  307222 node_ready.go:57] node "no-preload-483142" has "Ready":"False" status (will retry)
	I1119 02:33:42.519187  315363 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 02:33:42.519270  315363 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 02:33:42.519471  315363 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 02:33:42.519558  315363 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 02:33:42.519641  315363 kubeadm.go:319] OS: Linux
	I1119 02:33:42.519723  315363 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 02:33:42.519793  315363 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 02:33:42.519863  315363 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 02:33:42.519937  315363 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 02:33:42.520011  315363 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 02:33:42.520082  315363 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 02:33:42.520161  315363 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 02:33:42.520246  315363 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 02:33:42.520396  315363 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 02:33:42.520528  315363 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 02:33:42.520640  315363 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 02:33:42.520739  315363 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 02:33:42.522619  315363 out.go:252]   - Generating certificates and keys ...
	I1119 02:33:42.522717  315363 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 02:33:42.522778  315363 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 02:33:42.522841  315363 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 02:33:42.522898  315363 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 02:33:42.522948  315363 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 02:33:42.522986  315363 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 02:33:42.523065  315363 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 02:33:42.523231  315363 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-168452 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1119 02:33:42.523301  315363 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 02:33:42.523451  315363 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-168452 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1119 02:33:42.523527  315363 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 02:33:42.523599  315363 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 02:33:42.523658  315363 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 02:33:42.523737  315363 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 02:33:42.523787  315363 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 02:33:42.523833  315363 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 02:33:42.523879  315363 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 02:33:42.523945  315363 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 02:33:42.524004  315363 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 02:33:42.524082  315363 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 02:33:42.524137  315363 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 02:33:42.525751  315363 out.go:252]   - Booting up control plane ...
	I1119 02:33:42.525831  315363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 02:33:42.525893  315363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 02:33:42.525997  315363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 02:33:42.526121  315363 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 02:33:42.526235  315363 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 02:33:42.526323  315363 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 02:33:42.526401  315363 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 02:33:42.526441  315363 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 02:33:42.526546  315363 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 02:33:42.526633  315363 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 02:33:42.526684  315363 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001668097s
	I1119 02:33:42.526759  315363 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 02:33:42.526828  315363 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1119 02:33:42.526912  315363 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 02:33:42.526979  315363 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 02:33:42.527060  315363 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.143588684s
	I1119 02:33:42.527116  315363 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.751163591s
	I1119 02:33:42.527185  315363 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002351229s
	I1119 02:33:42.527279  315363 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 02:33:42.527418  315363 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 02:33:42.527475  315363 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 02:33:42.527642  315363 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-168452 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 02:33:42.527698  315363 kubeadm.go:319] [bootstrap-token] Using token: f9q4qi.t8dfm2zfbs2z2sgs
	I1119 02:33:42.529100  315363 out.go:252]   - Configuring RBAC rules ...
	I1119 02:33:42.529232  315363 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 02:33:42.529348  315363 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 02:33:42.529576  315363 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 02:33:42.529779  315363 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 02:33:42.529949  315363 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 02:33:42.530070  315363 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 02:33:42.530217  315363 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 02:33:42.530321  315363 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 02:33:42.530403  315363 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 02:33:42.530413  315363 kubeadm.go:319] 
	I1119 02:33:42.530492  315363 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 02:33:42.530502  315363 kubeadm.go:319] 
	I1119 02:33:42.530604  315363 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 02:33:42.530618  315363 kubeadm.go:319] 
	I1119 02:33:42.530647  315363 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 02:33:42.530726  315363 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 02:33:42.530797  315363 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 02:33:42.530809  315363 kubeadm.go:319] 
	I1119 02:33:42.530880  315363 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 02:33:42.530885  315363 kubeadm.go:319] 
	I1119 02:33:42.530954  315363 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 02:33:42.530981  315363 kubeadm.go:319] 
	I1119 02:33:42.531052  315363 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 02:33:42.531164  315363 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 02:33:42.531261  315363 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 02:33:42.531271  315363 kubeadm.go:319] 
	I1119 02:33:42.531424  315363 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 02:33:42.531551  315363 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 02:33:42.531570  315363 kubeadm.go:319] 
	I1119 02:33:42.531690  315363 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token f9q4qi.t8dfm2zfbs2z2sgs \
	I1119 02:33:42.531850  315363 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7672c4bccf136c60db27ba1101c0494bf335a1a16bcd609d07ab95cd78c58a5a \
	I1119 02:33:42.531878  315363 kubeadm.go:319] 	--control-plane 
	I1119 02:33:42.531885  315363 kubeadm.go:319] 
	I1119 02:33:42.531966  315363 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 02:33:42.531972  315363 kubeadm.go:319] 
	I1119 02:33:42.532046  315363 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token f9q4qi.t8dfm2zfbs2z2sgs \
	I1119 02:33:42.532149  315363 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7672c4bccf136c60db27ba1101c0494bf335a1a16bcd609d07ab95cd78c58a5a 
	I1119 02:33:42.532161  315363 cni.go:84] Creating CNI manager for ""
	I1119 02:33:42.532167  315363 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 02:33:42.535194  315363 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 02:33:42.536650  315363 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 02:33:42.541710  315363 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 02:33:42.541734  315363 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 02:33:42.556040  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 02:33:42.817018  315363 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 02:33:42.817147  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:42.817217  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-168452 minikube.k8s.io/updated_at=2025_11_19T02_33_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=embed-certs-168452 minikube.k8s.io/primary=true
	I1119 02:33:42.828812  315363 ops.go:34] apiserver oom_adj: -16
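oom_adj: -16 confirms the kubelet launched kube-apiserver with a negative OOM score, so under memory pressure the kernel will prefer to kill other processes first. The probe is the one-liner from the log:

    cat /proc/$(pgrep kube-apiserver)/oom_adj    # prints -16; lower = less likely to be OOM-killed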
	I1119 02:33:42.896633  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:43.396810  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:43.896801  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:44.397677  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1119 02:33:46.450455  208368 system_pods.go:55] pod list returned error: the server was unable to return a response in the time allotted, but may still be processing the request (get pods)
	I1119 02:33:46.452233  208368 out.go:203] 
	W1119 02:33:46.453522  208368 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for system pods: apiserver never returned a pod list
	W1119 02:33:46.453544  208368 out.go:285] * 
	W1119 02:33:46.455831  208368 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 02:33:46.457044  208368 out.go:203] 
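This GUEST_START failure belongs to a third interleaved run (pid tag 208368, most likely the TestKubernetesUpgrade profile from this report): the apiserver never returned a pod list within the 6m node-wait budget. As the box advises, the next diagnostic step is to capture the full logs for that profile, e.g.:

    minikube logs --file=logs.txt -p kubernetes-upgrade-896338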
	W1119 02:33:42.007484  307222 node_ready.go:57] node "no-preload-483142" has "Ready":"False" status (will retry)
	W1119 02:33:44.007813  307222 node_ready.go:57] node "no-preload-483142" has "Ready":"False" status (will retry)
	W1119 02:33:46.008192  307222 node_ready.go:57] node "no-preload-483142" has "Ready":"False" status (will retry)
	I1119 02:33:44.897377  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:45.397137  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:45.897616  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:46.397448  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:46.896710  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:47.397632  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:47.897150  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:48.003028  315363 kubeadm.go:1114] duration metric: took 5.18596901s to wait for elevateKubeSystemPrivileges
	I1119 02:33:48.003056  315363 kubeadm.go:403] duration metric: took 17.566632128s to StartCluster
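The repeated 'kubectl get sa default' calls between 02:33:42 and 02:33:48 implement elevateKubeSystemPrivileges: minikube issues the cluster-admin binding (logged at 02:33:42.817) and then polls roughly every 500ms until the token controller has created the default ServiceAccount. Stripped of the --kubeconfig plumbing, the binding is:

    kubectl create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin --serviceaccount=kube-system:default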
	I1119 02:33:48.003071  315363 settings.go:142] acquiring lock: {Name:mka23b77a5b0acf39a3c823ca2c708cd59c6f5ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:48.003125  315363 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11107/kubeconfig
	I1119 02:33:48.005668  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/kubeconfig: {Name:mka1017ae43351e828a5b28d36aa5ae57c3e40e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:48.005964  315363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 02:33:48.005984  315363 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 02:33:48.006098  315363 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:33:48.006191  315363 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-168452"
	I1119 02:33:48.006211  315363 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-168452"
	I1119 02:33:48.006209  315363 addons.go:70] Setting default-storageclass=true in profile "embed-certs-168452"
	I1119 02:33:48.006218  315363 config.go:182] Loaded profile config "embed-certs-168452": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:33:48.006231  315363 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-168452"
	I1119 02:33:48.006249  315363 host.go:66] Checking if "embed-certs-168452" exists ...
	I1119 02:33:48.006692  315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Status}}
	I1119 02:33:48.006819  315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Status}}
	I1119 02:33:48.007901  315363 out.go:179] * Verifying Kubernetes components...
	I1119 02:33:48.009142  315363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:33:48.032568  315363 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:33:48.032594  315363 addons.go:239] Setting addon default-storageclass=true in "embed-certs-168452"
	I1119 02:33:48.032649  315363 host.go:66] Checking if "embed-certs-168452" exists ...
	I1119 02:33:48.033140  315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Status}}
	I1119 02:33:48.034177  315363 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:33:48.034248  315363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:33:48.034332  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:48.063775  315363 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:33:48.063802  315363 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:33:48.063864  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:48.067763  315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
	I1119 02:33:48.088481  315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
	I1119 02:33:48.118977  315363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 02:33:48.181811  315363 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:33:48.192106  315363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:33:48.217510  315363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:33:48.350174  315363 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1119 02:33:48.351838  315363 node_ready.go:35] waiting up to 6m0s for node "embed-certs-168452" to be "Ready" ...
	I1119 02:33:48.575859  315363 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 02:33:48.577031  315363 addons.go:515] duration metric: took 570.934719ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 02:33:48.855157  315363 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-168452" context rescaled to 1 replicas
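The CoreDNS rewrite logged at 02:33:48.118 is one pipeline: dump the coredns ConfigMap, use sed to splice in a hosts{} block mapping host.minikube.internal to the gateway IP (plus a log directive), and kubectl replace the result. Unrolled for readability, minus the sudo and --kubeconfig plumbing:

    kubectl -n kube-system get configmap coredns -o yaml \
      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' \
            -e '/^        errors *$/i \        log' \
      | kubectl replace -f -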
	I1119 02:33:47.507132  307222 node_ready.go:49] node "no-preload-483142" is "Ready"
	I1119 02:33:47.507166  307222 node_ready.go:38] duration metric: took 14.002781703s for node "no-preload-483142" to be "Ready" ...
	I1119 02:33:47.507196  307222 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:33:47.507253  307222 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:33:47.522586  307222 api_server.go:72] duration metric: took 14.39103106s to wait for apiserver process to appear ...
	I1119 02:33:47.522619  307222 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:33:47.522641  307222 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 02:33:47.526803  307222 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1119 02:33:47.527974  307222 api_server.go:141] control plane version: v1.34.1
	I1119 02:33:47.528002  307222 api_server.go:131] duration metric: took 5.376603ms to wait for apiserver health ...
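The healthz wait is a plain HTTPS GET against the apiserver; the equivalent manual check (with -k, since the host does not trust the cluster CA) is:

    curl -k https://192.168.76.2:8443/healthz    # expect HTTP 200 with body "ok"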
	I1119 02:33:47.528022  307222 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 02:33:47.531978  307222 system_pods.go:59] 8 kube-system pods found
	I1119 02:33:47.532021  307222 system_pods.go:61] "coredns-66bc5c9577-zgfk9" [a3d24a51-2fec-4ae7-852e-c65aef957597] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:47.532030  307222 system_pods.go:61] "etcd-no-preload-483142" [7c2b0e3b-9242-4a7b-adbc-49e846f76e61] Running
	I1119 02:33:47.532039  307222 system_pods.go:61] "kindnet-6nr7d" [b6bf7df0-8af6-4156-990c-6f70cc159a8c] Running
	I1119 02:33:47.532046  307222 system_pods.go:61] "kube-apiserver-no-preload-483142" [1321916d-38ae-4f26-9d9f-0917576d2d6b] Running
	I1119 02:33:47.532053  307222 system_pods.go:61] "kube-controller-manager-no-preload-483142" [62d9ccdb-1224-4ba6-a8b6-4270a0043a26] Running
	I1119 02:33:47.532059  307222 system_pods.go:61] "kube-proxy-xhrdt" [2ed3d00d-7760-4eed-af73-abf314cf5901] Running
	I1119 02:33:47.532066  307222 system_pods.go:61] "kube-scheduler-no-preload-483142" [c58ff0b6-59c3-4677-8770-42bfef0d53a2] Running
	I1119 02:33:47.532078  307222 system_pods.go:61] "storage-provisioner" [c66a6926-3a4a-4aa9-b40b-349e1b056683] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:47.532088  307222 system_pods.go:74] duration metric: took 4.058015ms to wait for pod list to return data ...
	I1119 02:33:47.532104  307222 default_sa.go:34] waiting for default service account to be created ...
	I1119 02:33:47.535565  307222 default_sa.go:45] found service account: "default"
	I1119 02:33:47.535586  307222 default_sa.go:55] duration metric: took 3.475549ms for default service account to be created ...
	I1119 02:33:47.535596  307222 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 02:33:47.539134  307222 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:47.539173  307222 system_pods.go:89] "coredns-66bc5c9577-zgfk9" [a3d24a51-2fec-4ae7-852e-c65aef957597] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:47.539181  307222 system_pods.go:89] "etcd-no-preload-483142" [7c2b0e3b-9242-4a7b-adbc-49e846f76e61] Running
	I1119 02:33:47.539188  307222 system_pods.go:89] "kindnet-6nr7d" [b6bf7df0-8af6-4156-990c-6f70cc159a8c] Running
	I1119 02:33:47.539192  307222 system_pods.go:89] "kube-apiserver-no-preload-483142" [1321916d-38ae-4f26-9d9f-0917576d2d6b] Running
	I1119 02:33:47.539196  307222 system_pods.go:89] "kube-controller-manager-no-preload-483142" [62d9ccdb-1224-4ba6-a8b6-4270a0043a26] Running
	I1119 02:33:47.539204  307222 system_pods.go:89] "kube-proxy-xhrdt" [2ed3d00d-7760-4eed-af73-abf314cf5901] Running
	I1119 02:33:47.539210  307222 system_pods.go:89] "kube-scheduler-no-preload-483142" [c58ff0b6-59c3-4677-8770-42bfef0d53a2] Running
	I1119 02:33:47.539215  307222 system_pods.go:89] "storage-provisioner" [c66a6926-3a4a-4aa9-b40b-349e1b056683] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:47.539249  307222 retry.go:31] will retry after 294.264342ms: missing components: kube-dns
	I1119 02:33:47.840195  307222 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:47.840235  307222 system_pods.go:89] "coredns-66bc5c9577-zgfk9" [a3d24a51-2fec-4ae7-852e-c65aef957597] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:47.840244  307222 system_pods.go:89] "etcd-no-preload-483142" [7c2b0e3b-9242-4a7b-adbc-49e846f76e61] Running
	I1119 02:33:47.840253  307222 system_pods.go:89] "kindnet-6nr7d" [b6bf7df0-8af6-4156-990c-6f70cc159a8c] Running
	I1119 02:33:47.840257  307222 system_pods.go:89] "kube-apiserver-no-preload-483142" [1321916d-38ae-4f26-9d9f-0917576d2d6b] Running
	I1119 02:33:47.840262  307222 system_pods.go:89] "kube-controller-manager-no-preload-483142" [62d9ccdb-1224-4ba6-a8b6-4270a0043a26] Running
	I1119 02:33:47.840267  307222 system_pods.go:89] "kube-proxy-xhrdt" [2ed3d00d-7760-4eed-af73-abf314cf5901] Running
	I1119 02:33:47.840272  307222 system_pods.go:89] "kube-scheduler-no-preload-483142" [c58ff0b6-59c3-4677-8770-42bfef0d53a2] Running
	I1119 02:33:47.840288  307222 system_pods.go:89] "storage-provisioner" [c66a6926-3a4a-4aa9-b40b-349e1b056683] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:47.840308  307222 retry.go:31] will retry after 249.747879ms: missing components: kube-dns
	I1119 02:33:48.097280  307222 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:48.097316  307222 system_pods.go:89] "coredns-66bc5c9577-zgfk9" [a3d24a51-2fec-4ae7-852e-c65aef957597] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:48.097322  307222 system_pods.go:89] "etcd-no-preload-483142" [7c2b0e3b-9242-4a7b-adbc-49e846f76e61] Running
	I1119 02:33:48.097331  307222 system_pods.go:89] "kindnet-6nr7d" [b6bf7df0-8af6-4156-990c-6f70cc159a8c] Running
	I1119 02:33:48.097336  307222 system_pods.go:89] "kube-apiserver-no-preload-483142" [1321916d-38ae-4f26-9d9f-0917576d2d6b] Running
	I1119 02:33:48.097342  307222 system_pods.go:89] "kube-controller-manager-no-preload-483142" [62d9ccdb-1224-4ba6-a8b6-4270a0043a26] Running
	I1119 02:33:48.097346  307222 system_pods.go:89] "kube-proxy-xhrdt" [2ed3d00d-7760-4eed-af73-abf314cf5901] Running
	I1119 02:33:48.097350  307222 system_pods.go:89] "kube-scheduler-no-preload-483142" [c58ff0b6-59c3-4677-8770-42bfef0d53a2] Running
	I1119 02:33:48.097356  307222 system_pods.go:89] "storage-provisioner" [c66a6926-3a4a-4aa9-b40b-349e1b056683] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:48.097389  307222 retry.go:31] will retry after 312.943754ms: missing components: kube-dns
	I1119 02:33:48.416167  307222 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:48.416224  307222 system_pods.go:89] "coredns-66bc5c9577-zgfk9" [a3d24a51-2fec-4ae7-852e-c65aef957597] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:48.416233  307222 system_pods.go:89] "etcd-no-preload-483142" [7c2b0e3b-9242-4a7b-adbc-49e846f76e61] Running
	I1119 02:33:48.416242  307222 system_pods.go:89] "kindnet-6nr7d" [b6bf7df0-8af6-4156-990c-6f70cc159a8c] Running
	I1119 02:33:48.416249  307222 system_pods.go:89] "kube-apiserver-no-preload-483142" [1321916d-38ae-4f26-9d9f-0917576d2d6b] Running
	I1119 02:33:48.416265  307222 system_pods.go:89] "kube-controller-manager-no-preload-483142" [62d9ccdb-1224-4ba6-a8b6-4270a0043a26] Running
	I1119 02:33:48.416285  307222 system_pods.go:89] "kube-proxy-xhrdt" [2ed3d00d-7760-4eed-af73-abf314cf5901] Running
	I1119 02:33:48.416290  307222 system_pods.go:89] "kube-scheduler-no-preload-483142" [c58ff0b6-59c3-4677-8770-42bfef0d53a2] Running
	I1119 02:33:48.416304  307222 system_pods.go:89] "storage-provisioner" [c66a6926-3a4a-4aa9-b40b-349e1b056683] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:48.416338  307222 retry.go:31] will retry after 380.92269ms: missing components: kube-dns
	I1119 02:33:48.802673  307222 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:48.802712  307222 system_pods.go:89] "coredns-66bc5c9577-zgfk9" [a3d24a51-2fec-4ae7-852e-c65aef957597] Running
	I1119 02:33:48.802721  307222 system_pods.go:89] "etcd-no-preload-483142" [7c2b0e3b-9242-4a7b-adbc-49e846f76e61] Running
	I1119 02:33:48.802726  307222 system_pods.go:89] "kindnet-6nr7d" [b6bf7df0-8af6-4156-990c-6f70cc159a8c] Running
	I1119 02:33:48.802731  307222 system_pods.go:89] "kube-apiserver-no-preload-483142" [1321916d-38ae-4f26-9d9f-0917576d2d6b] Running
	I1119 02:33:48.802737  307222 system_pods.go:89] "kube-controller-manager-no-preload-483142" [62d9ccdb-1224-4ba6-a8b6-4270a0043a26] Running
	I1119 02:33:48.802742  307222 system_pods.go:89] "kube-proxy-xhrdt" [2ed3d00d-7760-4eed-af73-abf314cf5901] Running
	I1119 02:33:48.802755  307222 system_pods.go:89] "kube-scheduler-no-preload-483142" [c58ff0b6-59c3-4677-8770-42bfef0d53a2] Running
	I1119 02:33:48.802764  307222 system_pods.go:89] "storage-provisioner" [c66a6926-3a4a-4aa9-b40b-349e1b056683] Running
	I1119 02:33:48.802775  307222 system_pods.go:126] duration metric: took 1.26717246s to wait for k8s-apps to be running ...
	I1119 02:33:48.802788  307222 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 02:33:48.802838  307222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:33:48.819234  307222 system_svc.go:56] duration metric: took 16.435872ms WaitForService to wait for kubelet
	I1119 02:33:48.819260  307222 kubeadm.go:587] duration metric: took 15.68771243s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:33:48.819276  307222 node_conditions.go:102] verifying NodePressure condition ...
	I1119 02:33:48.823861  307222 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 02:33:48.823901  307222 node_conditions.go:123] node cpu capacity is 8
	I1119 02:33:48.823924  307222 node_conditions.go:105] duration metric: took 4.642889ms to run NodePressure ...
	I1119 02:33:48.823938  307222 start.go:242] waiting for startup goroutines ...
	I1119 02:33:48.823947  307222 start.go:247] waiting for cluster config update ...
	I1119 02:33:48.823960  307222 start.go:256] writing updated cluster config ...
	I1119 02:33:48.824308  307222 ssh_runner.go:195] Run: rm -f paused
	I1119 02:33:48.829946  307222 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:33:48.834766  307222 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zgfk9" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:48.839922  307222 pod_ready.go:94] pod "coredns-66bc5c9577-zgfk9" is "Ready"
	I1119 02:33:48.839950  307222 pod_ready.go:86] duration metric: took 5.154322ms for pod "coredns-66bc5c9577-zgfk9" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:48.842702  307222 pod_ready.go:83] waiting for pod "etcd-no-preload-483142" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:48.848818  307222 pod_ready.go:94] pod "etcd-no-preload-483142" is "Ready"
	I1119 02:33:48.848850  307222 pod_ready.go:86] duration metric: took 6.115348ms for pod "etcd-no-preload-483142" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:48.851685  307222 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-483142" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:48.856283  307222 pod_ready.go:94] pod "kube-apiserver-no-preload-483142" is "Ready"
	I1119 02:33:48.856303  307222 pod_ready.go:86] duration metric: took 4.595808ms for pod "kube-apiserver-no-preload-483142" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:48.858418  307222 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-483142" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:49.235039  307222 pod_ready.go:94] pod "kube-controller-manager-no-preload-483142" is "Ready"
	I1119 02:33:49.235070  307222 pod_ready.go:86] duration metric: took 376.631643ms for pod "kube-controller-manager-no-preload-483142" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:49.435524  307222 pod_ready.go:83] waiting for pod "kube-proxy-xhrdt" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:49.834741  307222 pod_ready.go:94] pod "kube-proxy-xhrdt" is "Ready"
	I1119 02:33:49.834767  307222 pod_ready.go:86] duration metric: took 399.219221ms for pod "kube-proxy-xhrdt" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:50.035303  307222 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-483142" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:50.434632  307222 pod_ready.go:94] pod "kube-scheduler-no-preload-483142" is "Ready"
	I1119 02:33:50.434662  307222 pod_ready.go:86] duration metric: took 399.329431ms for pod "kube-scheduler-no-preload-483142" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:50.434673  307222 pod_ready.go:40] duration metric: took 1.604675519s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:33:50.483179  307222 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 02:33:50.485257  307222 out.go:179] * Done! kubectl is now configured to use "no-preload-483142" cluster and "default" namespace by default
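The pod_ready loop above checks each control-plane pod for a Ready condition, selected by its component= or k8s-app= label. With plain kubectl the same wait can be expressed per label, for example:

    kubectl -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m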
	W1119 02:33:50.355270  315363 node_ready.go:57] node "embed-certs-168452" has "Ready":"False" status (will retry)
	W1119 02:33:52.857401  315363 node_ready.go:57] node "embed-certs-168452" has "Ready":"False" status (will retry)
	W1119 02:33:55.355262  315363 node_ready.go:57] node "embed-certs-168452" has "Ready":"False" status (will retry)
	W1119 02:33:57.855402  315363 node_ready.go:57] node "embed-certs-168452" has "Ready":"False" status (will retry)
	I1119 02:33:58.855203  315363 node_ready.go:49] node "embed-certs-168452" is "Ready"
	I1119 02:33:58.855237  315363 node_ready.go:38] duration metric: took 10.503369895s for node "embed-certs-168452" to be "Ready" ...
	I1119 02:33:58.855255  315363 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:33:58.855343  315363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:33:58.869209  315363 api_server.go:72] duration metric: took 10.863154231s to wait for apiserver process to appear ...
	I1119 02:33:58.869250  315363 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:33:58.869274  315363 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 02:33:58.875569  315363 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1119 02:33:58.876575  315363 api_server.go:141] control plane version: v1.34.1
	I1119 02:33:58.876617  315363 api_server.go:131] duration metric: took 7.360045ms to wait for apiserver health ...
	I1119 02:33:58.876629  315363 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 02:33:58.880702  315363 system_pods.go:59] 8 kube-system pods found
	I1119 02:33:58.880740  315363 system_pods.go:61] "coredns-66bc5c9577-zjkgg" [5c9ef71f-e4c3-4b8d-9ad3-71e0a56231e3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:58.880760  315363 system_pods.go:61] "etcd-embed-certs-168452" [d0ec7dd4-3fea-4cb9-9409-d7580e3096e5] Running
	I1119 02:33:58.880773  315363 system_pods.go:61] "kindnet-rf6v9" [6e29d839-0594-41f7-bfd8-1f9ab66b4c86] Running
	I1119 02:33:58.880780  315363 system_pods.go:61] "kube-apiserver-embed-certs-168452" [1a173dec-e248-4772-8884-094a1416f6bc] Running
	I1119 02:33:58.880788  315363 system_pods.go:61] "kube-controller-manager-embed-certs-168452" [54a570a5-683f-435f-8ef3-801a384a4e4c] Running
	I1119 02:33:58.880793  315363 system_pods.go:61] "kube-proxy-v65n7" [edc341f0-decd-4b30-a13d-a730cb8fc47d] Running
	I1119 02:33:58.880798  315363 system_pods.go:61] "kube-scheduler-embed-certs-168452" [0547e424-6b3a-487f-94ba-a3f38ab4d102] Running
	I1119 02:33:58.880805  315363 system_pods.go:61] "storage-provisioner" [eebce997-029a-4da2-b6cd-bb0ff195ebbe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:58.880814  315363 system_pods.go:74] duration metric: took 4.173761ms to wait for pod list to return data ...
	I1119 02:33:58.880828  315363 default_sa.go:34] waiting for default service account to be created ...
	I1119 02:33:58.888971  315363 default_sa.go:45] found service account: "default"
	I1119 02:33:58.888998  315363 default_sa.go:55] duration metric: took 8.162397ms for default service account to be created ...
	I1119 02:33:58.889023  315363 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 02:33:58.892650  315363 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:58.892685  315363 system_pods.go:89] "coredns-66bc5c9577-zjkgg" [5c9ef71f-e4c3-4b8d-9ad3-71e0a56231e3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:58.892694  315363 system_pods.go:89] "etcd-embed-certs-168452" [d0ec7dd4-3fea-4cb9-9409-d7580e3096e5] Running
	I1119 02:33:58.892703  315363 system_pods.go:89] "kindnet-rf6v9" [6e29d839-0594-41f7-bfd8-1f9ab66b4c86] Running
	I1119 02:33:58.892709  315363 system_pods.go:89] "kube-apiserver-embed-certs-168452" [1a173dec-e248-4772-8884-094a1416f6bc] Running
	I1119 02:33:58.892716  315363 system_pods.go:89] "kube-controller-manager-embed-certs-168452" [54a570a5-683f-435f-8ef3-801a384a4e4c] Running
	I1119 02:33:58.892721  315363 system_pods.go:89] "kube-proxy-v65n7" [edc341f0-decd-4b30-a13d-a730cb8fc47d] Running
	I1119 02:33:58.892726  315363 system_pods.go:89] "kube-scheduler-embed-certs-168452" [0547e424-6b3a-487f-94ba-a3f38ab4d102] Running
	I1119 02:33:58.892734  315363 system_pods.go:89] "storage-provisioner" [eebce997-029a-4da2-b6cd-bb0ff195ebbe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:58.892772  315363 retry.go:31] will retry after 264.439801ms: missing components: kube-dns
	I1119 02:33:59.162425  315363 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:59.162466  315363 system_pods.go:89] "coredns-66bc5c9577-zjkgg" [5c9ef71f-e4c3-4b8d-9ad3-71e0a56231e3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:59.162474  315363 system_pods.go:89] "etcd-embed-certs-168452" [d0ec7dd4-3fea-4cb9-9409-d7580e3096e5] Running
	I1119 02:33:59.162483  315363 system_pods.go:89] "kindnet-rf6v9" [6e29d839-0594-41f7-bfd8-1f9ab66b4c86] Running
	I1119 02:33:59.162488  315363 system_pods.go:89] "kube-apiserver-embed-certs-168452" [1a173dec-e248-4772-8884-094a1416f6bc] Running
	I1119 02:33:59.162494  315363 system_pods.go:89] "kube-controller-manager-embed-certs-168452" [54a570a5-683f-435f-8ef3-801a384a4e4c] Running
	I1119 02:33:59.162499  315363 system_pods.go:89] "kube-proxy-v65n7" [edc341f0-decd-4b30-a13d-a730cb8fc47d] Running
	I1119 02:33:59.162505  315363 system_pods.go:89] "kube-scheduler-embed-certs-168452" [0547e424-6b3a-487f-94ba-a3f38ab4d102] Running
	I1119 02:33:59.162512  315363 system_pods.go:89] "storage-provisioner" [eebce997-029a-4da2-b6cd-bb0ff195ebbe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:59.162533  315363 retry.go:31] will retry after 355.424259ms: missing components: kube-dns
	I1119 02:33:59.524153  315363 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:59.524197  315363 system_pods.go:89] "coredns-66bc5c9577-zjkgg" [5c9ef71f-e4c3-4b8d-9ad3-71e0a56231e3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:59.524212  315363 system_pods.go:89] "etcd-embed-certs-168452" [d0ec7dd4-3fea-4cb9-9409-d7580e3096e5] Running
	I1119 02:33:59.524223  315363 system_pods.go:89] "kindnet-rf6v9" [6e29d839-0594-41f7-bfd8-1f9ab66b4c86] Running
	I1119 02:33:59.524229  315363 system_pods.go:89] "kube-apiserver-embed-certs-168452" [1a173dec-e248-4772-8884-094a1416f6bc] Running
	I1119 02:33:59.524235  315363 system_pods.go:89] "kube-controller-manager-embed-certs-168452" [54a570a5-683f-435f-8ef3-801a384a4e4c] Running
	I1119 02:33:59.524241  315363 system_pods.go:89] "kube-proxy-v65n7" [edc341f0-decd-4b30-a13d-a730cb8fc47d] Running
	I1119 02:33:59.524255  315363 system_pods.go:89] "kube-scheduler-embed-certs-168452" [0547e424-6b3a-487f-94ba-a3f38ab4d102] Running
	I1119 02:33:59.524262  315363 system_pods.go:89] "storage-provisioner" [eebce997-029a-4da2-b6cd-bb0ff195ebbe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:59.524283  315363 retry.go:31] will retry after 458.998162ms: missing components: kube-dns
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                 NAMESPACE
	fde84f87a7c77       c80c8dbafe7dd       29 seconds ago       Exited              kube-controller-manager   7                   35551b04d3546       kube-controller-manager-kubernetes-upgrade-896338   kube-system
	138444193ad2d       c3994bc696102       About a minute ago   Exited              kube-apiserver            7                   e2959907c57f4       kube-apiserver-kubernetes-upgrade-896338            kube-system
	f7df69037dad7       5f1f5298c888d       6 minutes ago        Running             etcd                      0                   56e9fd844d8d6       etcd-kubernetes-upgrade-896338                      kube-system
	2fc1c7d64ddfc       7dd6aaa1717ab       6 minutes ago        Running             kube-scheduler            0                   f2a6405d8feb1       kube-scheduler-kubernetes-upgrade-896338            kube-system
	
	
	==> containerd <==
	Nov 19 02:32:59 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:32:59.021062382Z" level=info msg="StartContainer for \"138444193ad2dbb17169b6928e98492dc3f50c4037c63e586b06500015e8e7d3\""
	Nov 19 02:32:59 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:32:59.022118954Z" level=info msg="connecting to shim 138444193ad2dbb17169b6928e98492dc3f50c4037c63e586b06500015e8e7d3" address="unix:///run/containerd/s/e320be9675de49d356d8bea84184053a7dc60a98f39c19e3fba6dc0c23042a72" protocol=ttrpc version=3
	Nov 19 02:32:59 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:32:59.033847191Z" level=info msg="container event discarded" container=24679a3f4e0ae74c83eaf8c278efdd96c3afb2c5688e0f358470b78a7f7a38e9 type=CONTAINER_CREATED_EVENT
	Nov 19 02:32:59 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:32:59.125229003Z" level=info msg="StartContainer for \"138444193ad2dbb17169b6928e98492dc3f50c4037c63e586b06500015e8e7d3\" returns successfully"
	Nov 19 02:32:59 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:32:59.135942594Z" level=info msg="container event discarded" container=24679a3f4e0ae74c83eaf8c278efdd96c3afb2c5688e0f358470b78a7f7a38e9 type=CONTAINER_STARTED_EVENT
	Nov 19 02:32:59 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:32:59.172927081Z" level=info msg="received container exit event container_id:\"138444193ad2dbb17169b6928e98492dc3f50c4037c63e586b06500015e8e7d3\"  id:\"138444193ad2dbb17169b6928e98492dc3f50c4037c63e586b06500015e8e7d3\"  pid:3784  exit_status:1  exited_at:{seconds:1763519579  nanos:172596614}"
	Nov 19 02:32:59 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:32:59.239267865Z" level=info msg="container event discarded" container=24679a3f4e0ae74c83eaf8c278efdd96c3afb2c5688e0f358470b78a7f7a38e9 type=CONTAINER_STOPPED_EVENT
	Nov 19 02:32:59 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:32:59.342715266Z" level=info msg="container event discarded" container=622ff9a93a32895a33023cb0085923493b05558510186b9e15b460a8cfe29a06 type=CONTAINER_DELETED_EVENT
	Nov 19 02:32:59 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:32:59.986207240Z" level=info msg="RemoveContainer for \"06bfe3a0696dbfa4a3c0e0bebb72ad9841dbe9e784377890e1d9773d37735357\""
	Nov 19 02:32:59 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:32:59.991223390Z" level=info msg="RemoveContainer for \"06bfe3a0696dbfa4a3c0e0bebb72ad9841dbe9e784377890e1d9773d37735357\" returns successfully"
	Nov 19 02:33:11 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:33:11.047409956Z" level=info msg="container event discarded" container=53296e4f2221b9bbfa0fd6e3750b279e6f5ff82e99f25639336cfa9d9c4fa7b1 type=CONTAINER_CREATED_EVENT
	Nov 19 02:33:11 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:33:11.237862984Z" level=info msg="container event discarded" container=53296e4f2221b9bbfa0fd6e3750b279e6f5ff82e99f25639336cfa9d9c4fa7b1 type=CONTAINER_STARTED_EVENT
	Nov 19 02:33:32 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:33:32.005469774Z" level=info msg="CreateContainer within sandbox \"35551b04d3546ac17b04f26ca16fef2308a03fbcbdcf783f23fe3c87100dabef\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:7,}"
	Nov 19 02:33:32 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:33:32.012626260Z" level=info msg="Container fde84f87a7c77a310029a5d9dc290d6d8e6e24ee4225e963874d51891692a342: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 02:33:32 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:33:32.021189286Z" level=info msg="CreateContainer within sandbox \"35551b04d3546ac17b04f26ca16fef2308a03fbcbdcf783f23fe3c87100dabef\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:7,} returns container id \"fde84f87a7c77a310029a5d9dc290d6d8e6e24ee4225e963874d51891692a342\""
	Nov 19 02:33:32 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:33:32.021760601Z" level=info msg="StartContainer for \"fde84f87a7c77a310029a5d9dc290d6d8e6e24ee4225e963874d51891692a342\""
	Nov 19 02:33:32 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:33:32.024979395Z" level=info msg="connecting to shim fde84f87a7c77a310029a5d9dc290d6d8e6e24ee4225e963874d51891692a342" address="unix:///run/containerd/s/51870f9e62ca288f0c9b6fccb65cd55df1acecff3786dd06b1beeaee71a30efa" protocol=ttrpc version=3
	Nov 19 02:33:32 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:33:32.128897956Z" level=info msg="StartContainer for \"fde84f87a7c77a310029a5d9dc290d6d8e6e24ee4225e963874d51891692a342\" returns successfully"
	Nov 19 02:33:44 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:33:44.291399293Z" level=info msg="received container exit event container_id:\"fde84f87a7c77a310029a5d9dc290d6d8e6e24ee4225e963874d51891692a342\"  id:\"fde84f87a7c77a310029a5d9dc290d6d8e6e24ee4225e963874d51891692a342\"  pid:3833  exit_status:1  exited_at:{seconds:1763519624  nanos:291133835}"
	Nov 19 02:33:45 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:33:45.030006716Z" level=info msg="container event discarded" container=324645156bf7a8fe278ec737183ebf2e2f74cff3d9677b348e2be20e9f44205e type=CONTAINER_CREATED_EVENT
	Nov 19 02:33:45 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:33:45.094231812Z" level=info msg="RemoveContainer for \"1ba0c8fe18b0c917482c746cfef00696629bcc9748d8c3e10ced55d71c2c1a03\""
	Nov 19 02:33:45 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:33:45.098658834Z" level=info msg="RemoveContainer for \"1ba0c8fe18b0c917482c746cfef00696629bcc9748d8c3e10ced55d71c2c1a03\" returns successfully"
	Nov 19 02:33:45 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:33:45.121149610Z" level=info msg="container event discarded" container=324645156bf7a8fe278ec737183ebf2e2f74cff3d9677b348e2be20e9f44205e type=CONTAINER_STARTED_EVENT
	Nov 19 02:33:45 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:33:45.200566583Z" level=info msg="container event discarded" container=324645156bf7a8fe278ec737183ebf2e2f74cff3d9677b348e2be20e9f44205e type=CONTAINER_STOPPED_EVENT
	Nov 19 02:33:45 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:33:45.448296380Z" level=info msg="container event discarded" container=24679a3f4e0ae74c83eaf8c278efdd96c3afb2c5688e0f358470b78a7f7a38e9 type=CONTAINER_DELETED_EVENT
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 cb 73 dc 0d 8a 08 06
	[Nov19 02:31] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 74 0c d7 a6 53 08 06
	[  +0.000339] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 cb 73 dc 0d 8a 08 06
	[ +28.680399] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 e9 7c 92 36 13 08 06
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff fa 7c 46 16 38 40 08 06
	[Nov19 02:32] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1a f6 63 65 24 bc 08 06
	[  +4.552839] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 88 80 5a bc 94 08 06
	[ +11.086189] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 76 d1 26 7f 3d 08 06
	[  +0.000377] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa 7c 46 16 38 40 08 06
	[  +9.270754] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a2 49 fd 34 51 3b 08 06
	[  +0.000702] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 88 80 5a bc 94 08 06
	[ +23.593864] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca 86 43 5f 18 4c 08 06
	[  +0.000495] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a f6 63 65 24 bc 08 06
	
	
	==> etcd [f7df69037dad73c346bafade9f17ccda547baf86f109ee96ebf9ec5074fdc32c] <==
	{"level":"info","ts":"2025-11-19T02:27:13.340874Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 3"}
	{"level":"info","ts":"2025-11-19T02:27:13.340937Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 3"}
	{"level":"info","ts":"2025-11-19T02:27:13.341019Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-19T02:27:13.341043Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-11-19T02:27:13.341066Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 4"}
	{"level":"info","ts":"2025-11-19T02:27:13.380650Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 4"}
	{"level":"info","ts":"2025-11-19T02:27:13.380706Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-11-19T02:27:13.380747Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 4"}
	{"level":"info","ts":"2025-11-19T02:27:13.380766Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 4"}
	{"level":"info","ts":"2025-11-19T02:27:13.440272Z","caller":"etcdserver/server.go:1804","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:kubernetes-upgrade-896338 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-19T02:27:13.440340Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-19T02:27:13.440332Z","caller":"etcdserver/server.go:2409","msg":"updating cluster version using v3 API","from":"3.5","to":"3.6"}
	{"level":"info","ts":"2025-11-19T02:27:13.440287Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-19T02:27:13.440492Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-19T02:27:13.440518Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-19T02:27:13.441695Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"warn","ts":"2025-11-19T02:27:13.441701Z","caller":"v3rpc/grpc.go:52","msg":"etcdserver: failed to register grpc metrics","error":"duplicate metrics collector registration attempted"}
	{"level":"info","ts":"2025-11-19T02:27:13.442214Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-19T02:27:13.445812Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-19T02:27:13.446133Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-19T02:27:13.504146Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.5","to":"3.6"}
	{"level":"info","ts":"2025-11-19T02:27:13.504730Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-11-19T02:27:13.504846Z","caller":"etcdserver/server.go:2424","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-11-19T02:27:13.504929Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-11-19T02:27:13.505097Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	
	
	==> kernel <==
	 02:35:01 up  1:17,  0 user,  load average: 3.86, 3.82, 2.65
	Linux kubernetes-upgrade-896338 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [138444193ad2dbb17169b6928e98492dc3f50c4037c63e586b06500015e8e7d3] <==
	I1119 02:32:59.166043       1 options.go:263] external host was not specified, using 192.168.85.2
	I1119 02:32:59.168288       1 server.go:150] Version: v1.34.1
	I1119 02:32:59.168338       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1119 02:32:59.168696       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8443: listen tcp 0.0.0.0:8443: bind: address already in use"
	
	
	==> kube-controller-manager [fde84f87a7c77a310029a5d9dc290d6d8e6e24ee4225e963874d51891692a342] <==
	I1119 02:33:32.602406       1 serving.go:386] Generated self-signed cert in-memory
	I1119 02:33:34.269353       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1119 02:33:34.269400       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:33:34.272111       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1119 02:33:34.272192       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1119 02:33:34.272452       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1119 02:33:34.272666       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1119 02:33:44.286563       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[-]log failed: reason withheld\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[+]poststarthook/rbac/bootstrap-roles ok\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-scheduler [2fc1c7d64ddfc8cfae76fafb1d2818e8e60acd2e091805d791cfdd40dbc01017] <==
	I1119 02:27:11.053882       1 serving.go:386] Generated self-signed cert in-memory
	W1119 02:28:11.686321       1 authentication.go:397] Error looking up in-cluster authentication configuration: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps extension-apiserver-authentication)
	W1119 02:28:11.686355       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1119 02:28:11.686383       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1119 02:28:11.712530       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 02:28:11.712558       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:28:11.716268       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:28:11.716607       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:28:11.717079       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 02:28:11.717269       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 02:28:11.816783       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1119 02:28:45.821489       1 event_broadcaster.go:270] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{storage-provisioner.18794773cf64041d  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},EventTime:2025-11-19 02:28:11.818492899 +0000 UTC m=+61.524664710,Series:nil,ReportingController:default-scheduler,ReportingInstance:default-scheduler-kubernetes-upgrade-896338,Action:Scheduling,Reason:FailedScheduling,Regarding:{Pod kube-system storage-provisioner f6d9e6ac-27ed-4a02-94ee-92ca173894d7 v1 428 },Related:nil,Note:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.,Type:Warning,DeprecatedSource:{ },DeprecatedFirstTimestamp:0001-01-01 00:00:00 +0000 UTC,DeprecatedLastTimestamp:0001-01-01 00:00:00 +0000 UTC,DeprecatedCount:0,}"
	E1119 02:28:45.832113       1 pod_status_patch.go:111] "Failed to patch pod status" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/storage-provisioner"
	E1119 02:33:45.828418       1 pod_status_patch.go:111] "Failed to patch pod status" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/storage-provisioner"
	E1119 02:33:45.828690       1 event_broadcaster.go:270] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{storage-provisioner.18794773cf64041d  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},EventTime:2025-11-19 02:28:11.818492899 +0000 UTC m=+61.524664710,Series:&EventSeries{Count:2,LastObservedTime:2025-11-19 02:33:11.82615292 +0000 UTC m=+361.532324724,},ReportingController:default-scheduler,ReportingInstance:default-scheduler-kubernetes-upgrade-896338,Action:Scheduling,Reason:FailedScheduling,Regarding:{Pod kube-system storage-provisioner f6d9e6ac-27ed-4a02-94ee-92ca173894d7 v1 428 },Related:nil,Note:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.,Type:Warning,DeprecatedSource:{ },DeprecatedFirstTimestamp:0001-01-01 00:00:00 +0000 UTC,DeprecatedLastTimestamp:0001-01-01 00:00:00 +0000 UTC,DeprecatedCount:0,}"
	
	
	==> kubelet <==
	Nov 19 02:34:20 kubernetes-upgrade-896338 kubelet[1153]: I1119 02:34:20.003534    1153 scope.go:117] "RemoveContainer" containerID="fde84f87a7c77a310029a5d9dc290d6d8e6e24ee4225e963874d51891692a342"
	Nov 19 02:34:20 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:34:20.003737    1153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-kubernetes-upgrade-896338_kube-system(e32c4b2970efa8ef72e4afc8aa2f7038)\"" pod="kube-system/kube-controller-manager-kubernetes-upgrade-896338" podUID="e32c4b2970efa8ef72e4afc8aa2f7038"
	Nov 19 02:34:23 kubernetes-upgrade-896338 kubelet[1153]: I1119 02:34:23.003992    1153 scope.go:117] "RemoveContainer" containerID="138444193ad2dbb17169b6928e98492dc3f50c4037c63e586b06500015e8e7d3"
	Nov 19 02:34:23 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:34:23.004169    1153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes-upgrade-896338_kube-system(1d4af482de8ef1996b35bfa6adfca717)\"" pod="kube-system/kube-apiserver-kubernetes-upgrade-896338" podUID="1d4af482de8ef1996b35bfa6adfca717"
	Nov 19 02:34:25 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:34:25.129563    1153 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-apiserver-kubernetes-upgrade-896338.1879475d20211559  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-kubernetes-upgrade-896338,UID:1d4af482de8ef1996b35bfa6adfca717,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:kubernetes-upgrade-896338,},FirstTimestamp:2025-11-19 02:26:34.388829529 +0000 UTC m=+20.466301320,LastTimestamp:2025-11-19 02:26:52.38598035 +0000 UTC m=+38.463452140,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:kubernetes-upgrade-896338,}"
	Nov 19 02:34:29 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:34:29.005524    1153 mirror_client.go:139] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/kube-scheduler-kubernetes-upgrade-896338"
	Nov 19 02:34:31 kubernetes-upgrade-896338 kubelet[1153]: I1119 02:34:31.003625    1153 scope.go:117] "RemoveContainer" containerID="fde84f87a7c77a310029a5d9dc290d6d8e6e24ee4225e963874d51891692a342"
	Nov 19 02:34:31 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:34:31.003794    1153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-kubernetes-upgrade-896338_kube-system(e32c4b2970efa8ef72e4afc8aa2f7038)\"" pod="kube-system/kube-controller-manager-kubernetes-upgrade-896338" podUID="e32c4b2970efa8ef72e4afc8aa2f7038"
	Nov 19 02:34:31 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:34:31.765893    1153 kubelet_node_status.go:107] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="kubernetes-upgrade-896338"
	Nov 19 02:34:32 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:34:32.639539    1153 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-896338?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	Nov 19 02:34:34 kubernetes-upgrade-896338 kubelet[1153]: I1119 02:34:34.004504    1153 scope.go:117] "RemoveContainer" containerID="138444193ad2dbb17169b6928e98492dc3f50c4037c63e586b06500015e8e7d3"
	Nov 19 02:34:34 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:34:34.004719    1153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes-upgrade-896338_kube-system(1d4af482de8ef1996b35bfa6adfca717)\"" pod="kube-system/kube-apiserver-kubernetes-upgrade-896338" podUID="1d4af482de8ef1996b35bfa6adfca717"
	Nov 19 02:34:38 kubernetes-upgrade-896338 kubelet[1153]: I1119 02:34:38.768108    1153 kubelet_node_status.go:75] "Attempting to register node" node="kubernetes-upgrade-896338"
	Nov 19 02:34:45 kubernetes-upgrade-896338 kubelet[1153]: I1119 02:34:45.003261    1153 scope.go:117] "RemoveContainer" containerID="fde84f87a7c77a310029a5d9dc290d6d8e6e24ee4225e963874d51891692a342"
	Nov 19 02:34:45 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:34:45.003510    1153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-kubernetes-upgrade-896338_kube-system(e32c4b2970efa8ef72e4afc8aa2f7038)\"" pod="kube-system/kube-controller-manager-kubernetes-upgrade-896338" podUID="e32c4b2970efa8ef72e4afc8aa2f7038"
	Nov 19 02:34:46 kubernetes-upgrade-896338 kubelet[1153]: I1119 02:34:46.004331    1153 scope.go:117] "RemoveContainer" containerID="138444193ad2dbb17169b6928e98492dc3f50c4037c63e586b06500015e8e7d3"
	Nov 19 02:34:46 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:34:46.004614    1153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes-upgrade-896338_kube-system(1d4af482de8ef1996b35bfa6adfca717)\"" pod="kube-system/kube-apiserver-kubernetes-upgrade-896338" podUID="1d4af482de8ef1996b35bfa6adfca717"
	Nov 19 02:34:49 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:34:49.641074    1153 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-896338?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	Nov 19 02:34:57 kubernetes-upgrade-896338 kubelet[1153]: I1119 02:34:57.003860    1153 scope.go:117] "RemoveContainer" containerID="fde84f87a7c77a310029a5d9dc290d6d8e6e24ee4225e963874d51891692a342"
	Nov 19 02:34:57 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:34:57.004017    1153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-kubernetes-upgrade-896338_kube-system(e32c4b2970efa8ef72e4afc8aa2f7038)\"" pod="kube-system/kube-controller-manager-kubernetes-upgrade-896338" podUID="e32c4b2970efa8ef72e4afc8aa2f7038"
	Nov 19 02:34:59 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:34:59.131850    1153 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-kubernetes-upgrade-896338.187947608e8d132c  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-controller-manager-kubernetes-upgrade-896338,UID:e32c4b2970efa8ef72e4afc8aa2f7038,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:BackOff,Message:Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-kubernetes-upgrade-896338_kube-system(e32c4b2970efa8ef72e4afc8aa2f7038),Source:EventSource{Component:kubelet,Host:kubernetes-upgrade-896338,},FirstTimestamp:2025-11-19 02:26:49.126302508 +0000 UTC m=+35.203774294,LastTimestamp:2025-11-19 02:26:54.708921663 +0000 UTC m=+40.786393456,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:kubernetes-upgrade-896338,}"
	Nov 19 02:35:00 kubernetes-upgrade-896338 kubelet[1153]: I1119 02:35:00.003928    1153 scope.go:117] "RemoveContainer" containerID="138444193ad2dbb17169b6928e98492dc3f50c4037c63e586b06500015e8e7d3"
	Nov 19 02:35:00 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:35:00.004146    1153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes-upgrade-896338_kube-system(1d4af482de8ef1996b35bfa6adfca717)\"" pod="kube-system/kube-apiserver-kubernetes-upgrade-896338" podUID="1d4af482de8ef1996b35bfa6adfca717"
	Nov 19 02:35:00 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:35:00.034585    1153 status_manager.go:1018] "Failed to get status for pod" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-scheduler-kubernetes-upgrade-896338)" podUID="6e6aa192bc499077f7f5955d155982e2" pod="kube-system/kube-scheduler-kubernetes-upgrade-896338"
	Nov 19 02:35:01 kubernetes-upgrade-896338 kubelet[1153]: I1119 02:35:01.003809    1153 kubelet.go:3202] "Trying to delete pod" pod="kube-system/etcd-kubernetes-upgrade-896338" podUID="eea50bd2-467d-40e3-ac23-12aa3fd98404"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-896338 -n kubernetes-upgrade-896338
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-896338 -n kubernetes-upgrade-896338: exit status 2 (13.841005364s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1119 02:35:15.959500  335426 status.go:466] Error apiserver status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[-]log failed: reason withheld
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "kubernetes-upgrade-896338" apiserver is not running, skipping kubectl commands (state="Error")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-896338" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-896338
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-896338: (2.879662899s)
--- FAIL: TestKubernetesUpgrade (595.80s)
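
Note: the proximate cause is visible in the kube-apiserver log above. Each restart of the upgraded apiserver exits immediately with "failed to listen on 0.0.0.0:8443: bind: address already in use", after which the controller-manager crash-loops waiting for /healthz and the test times out. A minimal diagnostic sketch for a rerun of this profile (the profile is deleted during cleanup above, so this applies only before teardown); the commands below are standard minikube/iproute2/cri-tools invocations, not part of this harness, and are offered as a starting point rather than a confirmed procedure:

	# which process already holds the apiserver port on the node?
	out/minikube-linux-amd64 -p kubernetes-upgrade-896338 ssh -- sudo ss -ltnp 'sport = :8443'
	# list all kube-apiserver containers containerd knows about, including exited attempts
	out/minikube-linux-amd64 -p kubernetes-upgrade-896338 ssh -- sudo crictl ps -a --name kube-apiserver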

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (14.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-691094 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [90639f81-cb90-45ed-a6f9-0112e27e5bcb] Pending
helpers_test.go:352: "busybox" [90639f81-cb90-45ed-a6f9-0112e27e5bcb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [90639f81-cb90-45ed-a6f9-0112e27e5bcb] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003623659s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-691094 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
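Note: the expected value 1048576 matches the LimitNOFILE that the node's container runtime unit is assumed to raise for pods (an assumption based on common kicbase/containerd defaults, not something this log confirms); the observed 1024 suggests the pod inherited the stock shell limit instead. A minimal way to compare the two, using only standard systemctl and kubectl calls:

	# open-file limit declared by the containerd unit on the node (assumes systemd-managed containerd)
	out/minikube-linux-amd64 -p old-k8s-version-691094 ssh -- systemctl show containerd --property=LimitNOFILE
	# open-file limit actually visible inside the pod
	kubectl --context old-k8s-version-691094 exec busybox -- /bin/sh -c 'ulimit -n'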
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-691094
helpers_test.go:243: (dbg) docker inspect old-k8s-version-691094:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "839df93ceb63d0d13317decf25c2c4eaccc915d4750cfa4a087069705153e5fd",
	        "Created": "2025-11-19T02:32:51.932562407Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 304783,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T02:32:51.978861725Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/839df93ceb63d0d13317decf25c2c4eaccc915d4750cfa4a087069705153e5fd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/839df93ceb63d0d13317decf25c2c4eaccc915d4750cfa4a087069705153e5fd/hostname",
	        "HostsPath": "/var/lib/docker/containers/839df93ceb63d0d13317decf25c2c4eaccc915d4750cfa4a087069705153e5fd/hosts",
	        "LogPath": "/var/lib/docker/containers/839df93ceb63d0d13317decf25c2c4eaccc915d4750cfa4a087069705153e5fd/839df93ceb63d0d13317decf25c2c4eaccc915d4750cfa4a087069705153e5fd-json.log",
	        "Name": "/old-k8s-version-691094",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-691094:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-691094",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "839df93ceb63d0d13317decf25c2c4eaccc915d4750cfa4a087069705153e5fd",
	                "LowerDir": "/var/lib/docker/overlay2/d9d29317fe080187b4ac955f19c3b70929277bc9d433b324633c36af9102372e-init/diff:/var/lib/docker/overlay2/de7938e6a920c133c8c6b988444cfbf6706fdc6982445229ca70e2488a725edb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d9d29317fe080187b4ac955f19c3b70929277bc9d433b324633c36af9102372e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d9d29317fe080187b4ac955f19c3b70929277bc9d433b324633c36af9102372e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d9d29317fe080187b4ac955f19c3b70929277bc9d433b324633c36af9102372e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-691094",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-691094/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-691094",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-691094",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-691094",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "25d822acf28637fa4cce4fc25c4664674f3bbb16e082b090f611dcae48313299",
	            "SandboxKey": "/var/run/docker/netns/25d822acf286",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-691094": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4c0ba205c5e4031c33eff77f74c332bc7353ce431fe839a2e2d9f73a15968b57",
	                    "EndpointID": "44f08eca5a7de78215b0c2d3109731c14bba8acc1511177c790890960c94d079",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "76:b6:e5:21:42:df",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-691094",
	                        "839df93ceb63"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-691094 -n old-k8s-version-691094
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-691094 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-691094 logs -n 25: (1.133039849s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                             ARGS                                                                             │      PROFILE       │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-212776 sudo systemctl status kubelet --all --full --no-pager                                                                                       │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo systemctl cat kubelet --no-pager                                                                                                       │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                        │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cat /etc/kubernetes/kubelet.conf                                                                                                       │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cat /var/lib/kubelet/config.yaml                                                                                                       │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo systemctl status docker --all --full --no-pager                                                                                        │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo systemctl cat docker --no-pager                                                                                                        │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cat /etc/docker/daemon.json                                                                                                            │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo docker system info                                                                                                                     │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo systemctl status cri-docker --all --full --no-pager                                                                                    │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo systemctl cat cri-docker --no-pager                                                                                                    │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                               │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                         │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cri-dockerd --version                                                                                                                  │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo systemctl status containerd --all --full --no-pager                                                                                    │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo systemctl cat containerd --no-pager                                                                                                    │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cat /lib/systemd/system/containerd.service                                                                                             │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cat /etc/containerd/config.toml                                                                                                        │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo containerd config dump                                                                                                                 │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo systemctl status crio --all --full --no-pager                                                                                          │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo systemctl cat crio --no-pager                                                                                                          │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo crio config                                                                                                                            │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ delete  │ -p bridge-212776                                                                                                                                             │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ start   │ -p embed-certs-168452 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ embed-certs-168452 │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 02:33:19
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 02:33:19.818158  315363 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:33:19.818478  315363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:33:19.818490  315363 out.go:374] Setting ErrFile to fd 2...
	I1119 02:33:19.818495  315363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:33:19.818721  315363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11107/.minikube/bin
	I1119 02:33:19.819330  315363 out.go:368] Setting JSON to false
	I1119 02:33:19.820616  315363 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4540,"bootTime":1763515060,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 02:33:19.820746  315363 start.go:143] virtualization: kvm guest
	I1119 02:33:19.822862  315363 out.go:179] * [embed-certs-168452] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 02:33:19.824498  315363 notify.go:221] Checking for updates...
	I1119 02:33:19.825083  315363 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:33:19.827189  315363 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:33:19.828628  315363 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-11107/kubeconfig
	I1119 02:33:19.830282  315363 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11107/.minikube
	I1119 02:33:19.832156  315363 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 02:33:19.833558  315363 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:33:19.835289  315363 config.go:182] Loaded profile config "kubernetes-upgrade-896338": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:33:19.835456  315363 config.go:182] Loaded profile config "no-preload-483142": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:33:19.835531  315363 config.go:182] Loaded profile config "old-k8s-version-691094": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1119 02:33:19.835628  315363 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:33:19.869670  315363 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 02:33:19.869754  315363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:33:19.948056  315363 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:91 SystemTime:2025-11-19 02:33:19.935291829 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:33:19.948230  315363 docker.go:319] overlay module found
	I1119 02:33:19.949713  315363 out.go:179] * Using the docker driver based on user configuration
	I1119 02:33:19.290831  301934 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:33:19.290855  301934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:33:19.290915  301934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-691094
	I1119 02:33:19.311399  301934 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:33:19.311423  301934 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:33:19.311589  301934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-691094
	I1119 02:33:19.329209  301934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/old-k8s-version-691094/id_rsa Username:docker}
	I1119 02:33:19.348646  301934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/old-k8s-version-691094/id_rsa Username:docker}
	I1119 02:33:19.386878  301934 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 02:33:19.430928  301934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:33:19.450594  301934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:33:19.476197  301934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:33:19.710133  301934 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1119 02:33:19.711417  301934 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-691094" to be "Ready" ...
	I1119 02:33:19.994360  301934 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 02:33:19.950788  315363 start.go:309] selected driver: docker
	I1119 02:33:19.950820  315363 start.go:930] validating driver "docker" against <nil>
	I1119 02:33:19.950835  315363 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:33:19.951688  315363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:33:20.027806  315363 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:91 SystemTime:2025-11-19 02:33:20.015781927 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:33:20.028020  315363 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 02:33:20.028315  315363 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:33:20.030421  315363 out.go:179] * Using Docker driver with root privileges
	I1119 02:33:20.031895  315363 cni.go:84] Creating CNI manager for ""
	I1119 02:33:20.031986  315363 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 02:33:20.031997  315363 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 02:33:20.032066  315363 start.go:353] cluster config:
	{Name:embed-certs-168452 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-168452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:33:20.034765  315363 out.go:179] * Starting "embed-certs-168452" primary control-plane node in "embed-certs-168452" cluster
	I1119 02:33:20.037487  315363 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1119 02:33:20.039029  315363 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 02:33:20.040485  315363 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 02:33:20.040520  315363 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1119 02:33:20.040528  315363 cache.go:65] Caching tarball of preloaded images
	I1119 02:33:20.040583  315363 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 02:33:20.040607  315363 preload.go:238] Found /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1119 02:33:20.040616  315363 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1119 02:33:20.040718  315363 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/config.json ...
	I1119 02:33:20.040739  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/config.json: {Name:mk2c1cb92572f9f7372f9d895b2c58b49c99bb3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:20.063579  315363 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 02:33:20.063610  315363 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 02:33:20.063636  315363 cache.go:243] Successfully downloaded all kic artifacts
	I1119 02:33:20.063670  315363 start.go:360] acquireMachinesLock for embed-certs-168452: {Name:mk4860299f8ff219c79992500844e49d455bd43a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:33:20.063790  315363 start.go:364] duration metric: took 102.461µs to acquireMachinesLock for "embed-certs-168452"
	I1119 02:33:20.063835  315363 start.go:93] Provisioning new machine with config: &{Name:embed-certs-168452 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-168452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 02:33:20.063944  315363 start.go:125] createHost starting for "" (driver="docker")
	I1119 02:33:19.995882  301934 addons.go:515] duration metric: took 741.418352ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 02:33:20.065989  315363 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 02:33:20.066193  315363 start.go:159] libmachine.API.Create for "embed-certs-168452" (driver="docker")
	I1119 02:33:20.066226  315363 client.go:173] LocalClient.Create starting
	I1119 02:33:20.066306  315363 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem
	I1119 02:33:20.066338  315363 main.go:143] libmachine: Decoding PEM data...
	I1119 02:33:20.066360  315363 main.go:143] libmachine: Parsing certificate...
	I1119 02:33:20.066438  315363 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem
	I1119 02:33:20.066464  315363 main.go:143] libmachine: Decoding PEM data...
	I1119 02:33:20.066475  315363 main.go:143] libmachine: Parsing certificate...
	I1119 02:33:20.066835  315363 cli_runner.go:164] Run: docker network inspect embed-certs-168452 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 02:33:20.087889  315363 cli_runner.go:211] docker network inspect embed-certs-168452 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 02:33:20.087987  315363 network_create.go:284] running [docker network inspect embed-certs-168452] to gather additional debugging logs...
	I1119 02:33:20.088020  315363 cli_runner.go:164] Run: docker network inspect embed-certs-168452
	W1119 02:33:20.108512  315363 cli_runner.go:211] docker network inspect embed-certs-168452 returned with exit code 1
	I1119 02:33:20.108553  315363 network_create.go:287] error running [docker network inspect embed-certs-168452]: docker network inspect embed-certs-168452: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-168452 not found
	I1119 02:33:20.108577  315363 network_create.go:289] output of [docker network inspect embed-certs-168452]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-168452 not found
	
	** /stderr **
	I1119 02:33:20.108677  315363 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:33:20.129985  315363 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ed39016f2aa9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:16:a0:62:5a:51} reservation:<nil>}
	I1119 02:33:20.130640  315363 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-42b0c19d513b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:b2:bf:ca:ce:21:95} reservation:<nil>}
	I1119 02:33:20.131454  315363 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-002e39e6dc05 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0a:8e:34:24:50:a5} reservation:<nil>}
	I1119 02:33:20.132210  315363 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c1155ea75a94 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:76:37:ad:5a:d8:36} reservation:<nil>}
	I1119 02:33:20.133253  315363 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-3ec6f45a7001 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:12:9a:69:49:8b:1f} reservation:<nil>}
	I1119 02:33:20.134343  315363 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ddf580}
	I1119 02:33:20.134393  315363 network_create.go:124] attempt to create docker network embed-certs-168452 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1119 02:33:20.134459  315363 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-168452 embed-certs-168452
	I1119 02:33:20.192566  315363 network_create.go:108] docker network embed-certs-168452 192.168.94.0/24 created
	I1119 02:33:20.192597  315363 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-168452" container
	I1119 02:33:20.192665  315363 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 02:33:20.216991  315363 cli_runner.go:164] Run: docker volume create embed-certs-168452 --label name.minikube.sigs.k8s.io=embed-certs-168452 --label created_by.minikube.sigs.k8s.io=true
	I1119 02:33:20.240868  315363 oci.go:103] Successfully created a docker volume embed-certs-168452
	I1119 02:33:20.240948  315363 cli_runner.go:164] Run: docker run --rm --name embed-certs-168452-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-168452 --entrypoint /usr/bin/test -v embed-certs-168452:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1119 02:33:20.653772  315363 oci.go:107] Successfully prepared a docker volume embed-certs-168452
	I1119 02:33:20.653851  315363 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 02:33:20.653866  315363 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 02:33:20.653963  315363 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-168452:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	I1119 02:33:20.215687  301934 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-691094" context rescaled to 1 replicas
	W1119 02:33:21.715210  301934 node_ready.go:57] node "old-k8s-version-691094" has "Ready":"False" status (will retry)
	W1119 02:33:24.323644  301934 node_ready.go:57] node "old-k8s-version-691094" has "Ready":"False" status (will retry)
	I1119 02:33:28.147893  307222 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 02:33:28.147982  307222 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 02:33:28.148104  307222 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 02:33:28.148201  307222 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 02:33:28.148256  307222 kubeadm.go:319] OS: Linux
	I1119 02:33:28.148332  307222 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 02:33:28.148450  307222 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 02:33:28.148522  307222 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 02:33:28.148596  307222 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 02:33:28.148672  307222 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 02:33:28.148756  307222 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 02:33:28.148841  307222 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 02:33:28.148915  307222 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 02:33:28.149019  307222 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 02:33:28.149159  307222 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 02:33:28.149311  307222 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 02:33:28.149421  307222 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 02:33:28.151537  307222 out.go:252]   - Generating certificates and keys ...
	I1119 02:33:28.151647  307222 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 02:33:28.151774  307222 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 02:33:28.151834  307222 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 02:33:28.151902  307222 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 02:33:28.152000  307222 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 02:33:28.152068  307222 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 02:33:28.152179  307222 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 02:33:28.152343  307222 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-483142] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 02:33:28.152451  307222 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 02:33:28.152598  307222 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-483142] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 02:33:28.152690  307222 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 02:33:28.152796  307222 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 02:33:28.152837  307222 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 02:33:28.152894  307222 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 02:33:28.152945  307222 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 02:33:28.153002  307222 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 02:33:28.153051  307222 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 02:33:28.153118  307222 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 02:33:28.153171  307222 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 02:33:28.153255  307222 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 02:33:28.153358  307222 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 02:33:28.154609  307222 out.go:252]   - Booting up control plane ...
	I1119 02:33:28.154709  307222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 02:33:28.154821  307222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 02:33:28.154904  307222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 02:33:28.155033  307222 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 02:33:28.155173  307222 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 02:33:28.155323  307222 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 02:33:28.155456  307222 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 02:33:28.155501  307222 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 02:33:28.155631  307222 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 02:33:28.155728  307222 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 02:33:28.155805  307222 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001464049s
	I1119 02:33:28.155906  307222 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 02:33:28.156017  307222 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1119 02:33:28.156135  307222 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 02:33:28.156242  307222 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 02:33:28.156335  307222 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.319882231s
	I1119 02:33:28.156456  307222 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.432703999s
	I1119 02:33:28.156560  307222 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.001475545s
	I1119 02:33:28.156685  307222 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 02:33:28.156832  307222 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 02:33:28.156917  307222 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 02:33:28.157202  307222 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-483142 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 02:33:28.157272  307222 kubeadm.go:319] [bootstrap-token] Using token: nwrx92.0c942uuundzydmcz
	I1119 02:33:28.159046  307222 out.go:252]   - Configuring RBAC rules ...
	I1119 02:33:28.159207  307222 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 02:33:28.159328  307222 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 02:33:28.159549  307222 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 02:33:28.159720  307222 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 02:33:28.159922  307222 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 02:33:28.160077  307222 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 02:33:28.160254  307222 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 02:33:28.160329  307222 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 02:33:28.160427  307222 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 02:33:28.160443  307222 kubeadm.go:319] 
	I1119 02:33:28.160527  307222 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 02:33:28.160536  307222 kubeadm.go:319] 
	I1119 02:33:28.160603  307222 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 02:33:28.160610  307222 kubeadm.go:319] 
	I1119 02:33:28.160642  307222 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 02:33:28.160730  307222 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 02:33:28.160832  307222 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 02:33:28.160845  307222 kubeadm.go:319] 
	I1119 02:33:28.160922  307222 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 02:33:28.160942  307222 kubeadm.go:319] 
	I1119 02:33:28.161016  307222 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 02:33:28.161031  307222 kubeadm.go:319] 
	I1119 02:33:28.161114  307222 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 02:33:28.161229  307222 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 02:33:28.161347  307222 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 02:33:28.161359  307222 kubeadm.go:319] 
	I1119 02:33:28.161531  307222 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 02:33:28.161656  307222 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 02:33:28.161665  307222 kubeadm.go:319] 
	I1119 02:33:28.161797  307222 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token nwrx92.0c942uuundzydmcz \
	I1119 02:33:28.161968  307222 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7672c4bccf136c60db27ba1101c0494bf335a1a16bcd609d07ab95cd78c58a5a \
	I1119 02:33:28.162022  307222 kubeadm.go:319] 	--control-plane 
	I1119 02:33:28.162036  307222 kubeadm.go:319] 
	I1119 02:33:28.162163  307222 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 02:33:28.162174  307222 kubeadm.go:319] 
	I1119 02:33:28.162301  307222 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token nwrx92.0c942uuundzydmcz \
	I1119 02:33:28.162456  307222 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7672c4bccf136c60db27ba1101c0494bf335a1a16bcd609d07ab95cd78c58a5a 
	I1119 02:33:28.162469  307222 cni.go:84] Creating CNI manager for ""
	I1119 02:33:28.162475  307222 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 02:33:28.164382  307222 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 02:33:25.786283  315363 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-168452:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (5.132274902s)
	I1119 02:33:25.786322  315363 kic.go:203] duration metric: took 5.132452147s to extract preloaded images to volume ...
	W1119 02:33:25.786460  315363 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1119 02:33:25.786504  315363 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1119 02:33:25.786554  315363 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 02:33:25.853413  315363 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-168452 --name embed-certs-168452 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-168452 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-168452 --network embed-certs-168452 --ip 192.168.94.2 --volume embed-certs-168452:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1119 02:33:26.238651  315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Running}}
	I1119 02:33:26.261169  315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Status}}
	I1119 02:33:26.284313  315363 cli_runner.go:164] Run: docker exec embed-certs-168452 stat /var/lib/dpkg/alternatives/iptables
	I1119 02:33:26.336955  315363 oci.go:144] the created container "embed-certs-168452" has a running status.
	I1119 02:33:26.336985  315363 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa...
	I1119 02:33:26.484310  315363 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 02:33:26.517116  315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Status}}
	I1119 02:33:26.542901  315363 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 02:33:26.542925  315363 kic_runner.go:114] Args: [docker exec --privileged embed-certs-168452 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 02:33:26.595205  315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Status}}
	I1119 02:33:26.623359  315363 machine.go:94] provisionDockerMachine start ...
	I1119 02:33:26.623527  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:26.646254  315363 main.go:143] libmachine: Using SSH client type: native
	I1119 02:33:26.646550  315363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33105 <nil> <nil>}
	I1119 02:33:26.646569  315363 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 02:33:26.799221  315363 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-168452
	
	I1119 02:33:26.799250  315363 ubuntu.go:182] provisioning hostname "embed-certs-168452"
	I1119 02:33:26.799334  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:26.820929  315363 main.go:143] libmachine: Using SSH client type: native
	I1119 02:33:26.821188  315363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33105 <nil> <nil>}
	I1119 02:33:26.821210  315363 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-168452 && echo "embed-certs-168452" | sudo tee /etc/hostname
	I1119 02:33:26.966035  315363 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-168452
	
	I1119 02:33:26.966125  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:26.985276  315363 main.go:143] libmachine: Using SSH client type: native
	I1119 02:33:26.985598  315363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33105 <nil> <nil>}
	I1119 02:33:26.985633  315363 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-168452' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-168452/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-168452' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 02:33:27.121670  315363 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 02:33:27.121703  315363 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-11107/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-11107/.minikube}
	I1119 02:33:27.121727  315363 ubuntu.go:190] setting up certificates
	I1119 02:33:27.123000  315363 provision.go:84] configureAuth start
	I1119 02:33:27.123195  315363 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-168452
	I1119 02:33:27.143490  315363 provision.go:143] copyHostCerts
	I1119 02:33:27.143570  315363 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11107/.minikube/ca.pem, removing ...
	I1119 02:33:27.143580  315363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11107/.minikube/ca.pem
	I1119 02:33:27.143645  315363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-11107/.minikube/ca.pem (1082 bytes)
	I1119 02:33:27.143736  315363 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11107/.minikube/cert.pem, removing ...
	I1119 02:33:27.143744  315363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11107/.minikube/cert.pem
	I1119 02:33:27.143773  315363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-11107/.minikube/cert.pem (1123 bytes)
	I1119 02:33:27.143829  315363 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11107/.minikube/key.pem, removing ...
	I1119 02:33:27.143835  315363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11107/.minikube/key.pem
	I1119 02:33:27.143858  315363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-11107/.minikube/key.pem (1675 bytes)
	I1119 02:33:27.143923  315363 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-11107/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca-key.pem org=jenkins.embed-certs-168452 san=[127.0.0.1 192.168.94.2 embed-certs-168452 localhost minikube]
	I1119 02:33:27.239080  315363 provision.go:177] copyRemoteCerts
	I1119 02:33:27.239165  315363 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 02:33:27.239198  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:27.262397  315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
	I1119 02:33:27.362967  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 02:33:27.387666  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1119 02:33:27.418735  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 02:33:27.446098  315363 provision.go:87] duration metric: took 323.082791ms to configureAuth
	I1119 02:33:27.446129  315363 ubuntu.go:206] setting minikube options for container-runtime
	I1119 02:33:27.446313  315363 config.go:182] Loaded profile config "embed-certs-168452": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:33:27.446327  315363 machine.go:97] duration metric: took 822.891862ms to provisionDockerMachine
	I1119 02:33:27.446333  315363 client.go:176] duration metric: took 7.38010166s to LocalClient.Create
	I1119 02:33:27.446351  315363 start.go:167] duration metric: took 7.380160884s to libmachine.API.Create "embed-certs-168452"
	I1119 02:33:27.446358  315363 start.go:293] postStartSetup for "embed-certs-168452" (driver="docker")
	I1119 02:33:27.446409  315363 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 02:33:27.446465  315363 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 02:33:27.446501  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:27.470807  315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
	I1119 02:33:27.575097  315363 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 02:33:27.580067  315363 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 02:33:27.580102  315363 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 02:33:27.580115  315363 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11107/.minikube/addons for local assets ...
	I1119 02:33:27.580188  315363 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11107/.minikube/files for local assets ...
	I1119 02:33:27.580303  315363 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem -> 146572.pem in /etc/ssl/certs
	I1119 02:33:27.580434  315363 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 02:33:27.588848  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem --> /etc/ssl/certs/146572.pem (1708 bytes)
	I1119 02:33:27.611498  315363 start.go:296] duration metric: took 165.12815ms for postStartSetup
	I1119 02:33:27.611895  315363 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-168452
	I1119 02:33:27.630987  315363 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/config.json ...
	I1119 02:33:27.631276  315363 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:33:27.631327  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:27.650599  315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
	I1119 02:33:27.747119  315363 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 02:33:27.752242  315363 start.go:128] duration metric: took 7.68828048s to createHost
	I1119 02:33:27.752270  315363 start.go:83] releasing machines lock for "embed-certs-168452", held for 7.688466151s
	I1119 02:33:27.752448  315363 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-168452
	I1119 02:33:27.772595  315363 ssh_runner.go:195] Run: cat /version.json
	I1119 02:33:27.772634  315363 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 02:33:27.772668  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:27.772695  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:27.795020  315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
	I1119 02:33:27.795311  315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
	I1119 02:33:27.889466  315363 ssh_runner.go:195] Run: systemctl --version
	I1119 02:33:27.948057  315363 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 02:33:27.953270  315363 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 02:33:27.953328  315363 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 02:33:27.979962  315363 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 02:33:27.979983  315363 start.go:496] detecting cgroup driver to use...
	I1119 02:33:27.980013  315363 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 02:33:27.980050  315363 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1119 02:33:27.995148  315363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1119 02:33:28.009176  315363 docker.go:218] disabling cri-docker service (if available) ...
	I1119 02:33:28.009239  315363 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 02:33:28.028120  315363 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 02:33:28.047654  315363 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 02:33:28.137742  315363 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 02:33:28.233503  315363 docker.go:234] disabling docker service ...
	I1119 02:33:28.233569  315363 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 02:33:28.254546  315363 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 02:33:28.270970  315363 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 02:33:28.372358  315363 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 02:33:28.475816  315363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 02:33:28.494447  315363 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 02:33:28.514112  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1119 02:33:28.528713  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1119 02:33:28.542307  315363 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1119 02:33:28.542395  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1119 02:33:28.553682  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 02:33:28.564425  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1119 02:33:28.574563  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 02:33:28.585047  315363 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 02:33:28.594876  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1119 02:33:28.606066  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1119 02:33:28.616549  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1119 02:33:28.627283  315363 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 02:33:28.635846  315363 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 02:33:28.643854  315363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:33:28.727138  315363 ssh_runner.go:195] Run: sudo systemctl restart containerd
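
The series of sed rewrites above flips containerd to the systemd cgroup driver before the restart. A minimal Go equivalent of the SystemdCgroup edit, applied to an illustrative config fragment (the real /etc/containerd/config.toml is larger):

// Sketch of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g'
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
`
	// (?m) makes ^ and $ match per line, like sed's default line addressing.
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = true"))
}
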
	I1119 02:33:28.825075  315363 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1119 02:33:28.825141  315363 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1119 02:33:28.829886  315363 start.go:564] Will wait 60s for crictl version
	I1119 02:33:28.829954  315363 ssh_runner.go:195] Run: which crictl
	I1119 02:33:28.834062  315363 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 02:33:28.859386  315363 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1119 02:33:28.859454  315363 ssh_runner.go:195] Run: containerd --version
	I1119 02:33:28.881932  315363 ssh_runner.go:195] Run: containerd --version
	I1119 02:33:28.905418  315363 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1119 02:33:28.906851  315363 cli_runner.go:164] Run: docker network inspect embed-certs-168452 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:33:28.925576  315363 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1119 02:33:28.930043  315363 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
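
The /bin/bash one-liner above makes the /etc/hosts update idempotent: strip any stale host.minikube.internal line, then append the fresh mapping. The same pattern is reused later for control-plane.minikube.internal. A minimal Go sketch of the same rewrite, pointed at a scratch file (hosts.sample is a made-up path for safe local testing):

package main

import (
	"fmt"
	"os"
	"strings"
)

// injectHost mirrors the bash one-liner: drop any line ending in "\t<name>"
// (the grep -v $'\t<name>$' step), then append "<ip>\t<name>".
func injectHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := injectHost("hosts.sample", "192.168.94.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
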
	I1119 02:33:28.941472  315363 kubeadm.go:884] updating cluster {Name:embed-certs-168452 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-168452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 02:33:28.941570  315363 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 02:33:28.941633  315363 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:33:28.969084  315363 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 02:33:28.969102  315363 containerd.go:534] Images already preloaded, skipping extraction
	I1119 02:33:28.969159  315363 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:33:28.994529  315363 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 02:33:28.994549  315363 cache_images.go:86] Images are preloaded, skipping loading
	I1119 02:33:28.994556  315363 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 containerd true true} ...
	I1119 02:33:28.994637  315363 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-168452 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-168452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 02:33:28.994694  315363 ssh_runner.go:195] Run: sudo crictl info
	I1119 02:33:29.023174  315363 cni.go:84] Creating CNI manager for ""
	I1119 02:33:29.023197  315363 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 02:33:29.023211  315363 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 02:33:29.023232  315363 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-168452 NodeName:embed-certs-168452 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:33:29.023337  315363 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-168452"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 02:33:29.023423  315363 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 02:33:29.032358  315363 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 02:33:29.032438  315363 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:33:29.041206  315363 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1119 02:33:29.056159  315363 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:33:29.074583  315363 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
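
The kubeadm.yaml just copied is the four-document YAML stream printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal sketch of walking such a multi-document stream, assuming the gopkg.in/yaml.v3 module is available; the two inline documents are abbreviated stand-ins:

package main

import (
	"fmt"
	"io"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	// Abbreviated stand-in for the generated kubeadm.yaml above.
	stream := `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
`
	dec := yaml.NewDecoder(strings.NewReader(stream))
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break // decoder yields one document per "---"-separated section
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s/%s\n", doc["apiVersion"], doc["kind"])
	}
}
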
	I1119 02:33:29.089316  315363 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:33:29.093854  315363 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:33:29.106602  315363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:33:29.193818  315363 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:33:29.220027  315363 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452 for IP: 192.168.94.2
	I1119 02:33:29.220053  315363 certs.go:195] generating shared ca certs ...
	I1119 02:33:29.220075  315363 certs.go:227] acquiring lock for ca certs: {Name:mk11d6789b2333e17b3937495b501fbcca15c242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:29.220231  315363 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-11107/.minikube/ca.key
	I1119 02:33:29.220278  315363 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-11107/.minikube/proxy-client-ca.key
	I1119 02:33:29.220287  315363 certs.go:257] generating profile certs ...
	I1119 02:33:29.220334  315363 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/client.key
	I1119 02:33:29.220351  315363 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/client.crt with IP's: []
	I1119 02:33:29.496773  315363 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/client.crt ...
	I1119 02:33:29.496800  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/client.crt: {Name:mkdb5e24f9c8b0d3d9849ba91ac24e28be0abdf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:29.496993  315363 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/client.key ...
	I1119 02:33:29.497006  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/client.key: {Name:mk5aa88fe9180cc5f94c07d5a968428b4ccf37cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:29.497088  315363 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.key.0e221cc2
	I1119 02:33:29.497102  315363 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.crt.0e221cc2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	W1119 02:33:26.721525  301934 node_ready.go:57] node "old-k8s-version-691094" has "Ready":"False" status (will retry)
	W1119 02:33:29.215940  301934 node_ready.go:57] node "old-k8s-version-691094" has "Ready":"False" status (will retry)
	I1119 02:33:28.165835  307222 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 02:33:28.176028  307222 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 02:33:28.176052  307222 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 02:33:28.195615  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 02:33:28.450816  307222 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 02:33:28.450899  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:28.450933  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-483142 minikube.k8s.io/updated_at=2025_11_19T02_33_28_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=no-preload-483142 minikube.k8s.io/primary=true
	I1119 02:33:28.538275  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:28.538445  307222 ops.go:34] apiserver oom_adj: -16
	I1119 02:33:29.038968  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:29.539224  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:30.038530  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:30.539271  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:31.038434  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:31.538496  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:32.038945  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:32.539001  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:33.038571  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:33.129034  307222 kubeadm.go:1114] duration metric: took 4.678195875s to wait for elevateKubeSystemPrivileges
	I1119 02:33:33.129095  307222 kubeadm.go:403] duration metric: took 17.40558167s to StartCluster
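
Part of the elevateKubeSystemPrivileges step timed above is the minikube-rbac clusterrolebinding created at 02:33:28 via kubectl. A minimal client-go sketch of creating the same binding (assumes the k8s.io/client-go module and the kubeconfig path from the log; not minikube's actual code, which shells out to kubectl):

package main

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Equivalent of: kubectl create clusterrolebinding minikube-rbac
	//   --clusterrole=cluster-admin --serviceaccount=kube-system:default
	crb := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
		Subjects: []rbacv1.Subject{{
			Kind:      "ServiceAccount",
			Name:      "default",
			Namespace: "kube-system",
		}},
		RoleRef: rbacv1.RoleRef{
			APIGroup: "rbac.authorization.k8s.io",
			Kind:     "ClusterRole",
			Name:     "cluster-admin",
		},
	}
	if _, err := cs.RbacV1().ClusterRoleBindings().Create(
		context.TODO(), crb, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
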
	I1119 02:33:33.129119  307222 settings.go:142] acquiring lock: {Name:mka23b77a5b0acf39a3c823ca2c708cd59c6f5ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:33.129202  307222 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11107/kubeconfig
	I1119 02:33:33.131182  307222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/kubeconfig: {Name:mka1017ae43351e828a5b28d36aa5ae57c3e40e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:33.131481  307222 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 02:33:33.131519  307222 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 02:33:33.131585  307222 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:33:33.131706  307222 addons.go:70] Setting storage-provisioner=true in profile "no-preload-483142"
	I1119 02:33:33.131748  307222 config.go:182] Loaded profile config "no-preload-483142": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:33:33.131794  307222 addons.go:70] Setting default-storageclass=true in profile "no-preload-483142"
	I1119 02:33:33.131827  307222 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-483142"
	I1119 02:33:33.131810  307222 addons.go:239] Setting addon storage-provisioner=true in "no-preload-483142"
	I1119 02:33:33.131959  307222 host.go:66] Checking if "no-preload-483142" exists ...
	I1119 02:33:33.132200  307222 cli_runner.go:164] Run: docker container inspect no-preload-483142 --format={{.State.Status}}
	I1119 02:33:33.132480  307222 cli_runner.go:164] Run: docker container inspect no-preload-483142 --format={{.State.Status}}
	I1119 02:33:33.134152  307222 out.go:179] * Verifying Kubernetes components...
	I1119 02:33:33.135585  307222 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:33:33.159834  307222 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:33:33.160479  307222 addons.go:239] Setting addon default-storageclass=true in "no-preload-483142"
	I1119 02:33:33.160545  307222 host.go:66] Checking if "no-preload-483142" exists ...
	I1119 02:33:33.161017  307222 cli_runner.go:164] Run: docker container inspect no-preload-483142 --format={{.State.Status}}
	I1119 02:33:33.161390  307222 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:33:33.161410  307222 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:33:33.161458  307222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-483142
	I1119 02:33:33.198354  307222 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:33:33.198390  307222 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:33:33.198448  307222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-483142
	I1119 02:33:33.198522  307222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/no-preload-483142/id_rsa Username:docker}
	I1119 02:33:33.223657  307222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/no-preload-483142/id_rsa Username:docker}
	I1119 02:33:33.248952  307222 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
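
The sed pipeline above splices a hosts{} block into CoreDNS's Corefile so pods can resolve host.minikube.internal. A minimal Go sketch of that insertion, using an abbreviated, illustrative Corefile rather than the live ConfigMap:

package main

import (
	"fmt"
	"strings"
)

func main() {
	corefile := `.:53 {
        errors
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
}`
	hosts := `        hosts {
           192.168.76.1 host.minikube.internal
           fallthrough
        }
`
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		// Insert the hosts block just before the forward directive,
		// mirroring the sed address /^        forward . \/etc\/resolv.conf.*/i.
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(hosts)
		}
		out.WriteString(line)
	}
	fmt.Print(out.String())
}
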
	I1119 02:33:33.322673  307222 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:33:33.348662  307222 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:33:33.354901  307222 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:33:33.503051  307222 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1119 02:33:33.504327  307222 node_ready.go:35] waiting up to 6m0s for node "no-preload-483142" to be "Ready" ...
	I1119 02:33:33.756829  307222 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
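
node_ready.go then polls the node's Ready condition for up to 6m0s. A minimal client-go sketch of such a wait loop (assumes the k8s.io/client-go module; the node name and kubeconfig path are taken from the log, and the fixed 2s sleep stands in for minikube's jittered retries):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if ok, err := nodeReady(cs, "no-preload-483142"); err == nil && ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for Ready")
}
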
	I1119 02:33:29.844643  315363 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.crt.0e221cc2 ...
	I1119 02:33:29.844667  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.crt.0e221cc2: {Name:mk1596cf7137a998e277abf94c4c839907009a9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:29.844872  315363 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.key.0e221cc2 ...
	I1119 02:33:29.844901  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.key.0e221cc2: {Name:mk9d817ab63555ebb02e0590916ce23352cf008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:29.845022  315363 certs.go:382] copying /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.crt.0e221cc2 -> /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.crt
	I1119 02:33:29.845144  315363 certs.go:386] copying /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.key.0e221cc2 -> /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.key
	I1119 02:33:29.845239  315363 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.key
	I1119 02:33:29.845260  315363 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.crt with IP's: []
	I1119 02:33:30.013529  315363 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.crt ...
	I1119 02:33:30.013564  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.crt: {Name:mka683634a30ab1845434f0fc49f75059694b447 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:30.013775  315363 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.key ...
	I1119 02:33:30.013796  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.key: {Name:mk9e8dbde74fbcae5bb0e966570ae4f43c6f07e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:30.014054  315363 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/14657.pem (1338 bytes)
	W1119 02:33:30.014108  315363 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-11107/.minikube/certs/14657_empty.pem, impossibly tiny 0 bytes
	I1119 02:33:30.014124  315363 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 02:33:30.014183  315363 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem (1082 bytes)
	I1119 02:33:30.014219  315363 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:33:30.014257  315363 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/key.pem (1675 bytes)
	I1119 02:33:30.014318  315363 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem (1708 bytes)
	I1119 02:33:30.014986  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:33:30.034798  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 02:33:30.054155  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:33:30.074272  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:33:30.094396  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1119 02:33:30.114605  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 02:33:30.133991  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:33:30.153105  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 02:33:30.172052  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem --> /usr/share/ca-certificates/146572.pem (1708 bytes)
	I1119 02:33:30.194139  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:33:30.212546  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/certs/14657.pem --> /usr/share/ca-certificates/14657.pem (1338 bytes)
	I1119 02:33:30.231534  315363 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 02:33:30.246493  315363 ssh_runner.go:195] Run: openssl version
	I1119 02:33:30.252586  315363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146572.pem && ln -fs /usr/share/ca-certificates/146572.pem /etc/ssl/certs/146572.pem"
	I1119 02:33:30.261620  315363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146572.pem
	I1119 02:33:30.265824  315363 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:02 /usr/share/ca-certificates/146572.pem
	I1119 02:33:30.265886  315363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146572.pem
	I1119 02:33:30.301164  315363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/146572.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 02:33:30.310429  315363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:33:30.319818  315363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:33:30.323998  315363 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:57 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:33:30.324046  315363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:33:30.360567  315363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 02:33:30.370492  315363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14657.pem && ln -fs /usr/share/ca-certificates/14657.pem /etc/ssl/certs/14657.pem"
	I1119 02:33:30.380695  315363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14657.pem
	I1119 02:33:30.385171  315363 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:02 /usr/share/ca-certificates/14657.pem
	I1119 02:33:30.385241  315363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14657.pem
	I1119 02:33:30.422375  315363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14657.pem /etc/ssl/certs/51391683.0"
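
Each test -L/ln -fs pair above publishes a cert under its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem) so OpenSSL-based clients can find it in /etc/ssl/certs. A minimal Go sketch of the same step, shelling out to the openssl x509 -hash invocation already shown in the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	// Ask openssl for the subject hash, exactly as the runner does above.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent of: test -L <link> || ln -fs <pem> <link>
	if err := os.Symlink(pemPath, link); err != nil && !os.IsExist(err) {
		panic(err)
	}
	fmt.Println("linked", link, "->", pemPath)
}
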
	I1119 02:33:30.432329  315363 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:33:30.436333  315363 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 02:33:30.436432  315363 kubeadm.go:401] StartCluster: {Name:embed-certs-168452 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-168452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:33:30.436494  315363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1119 02:33:30.436588  315363 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:33:30.465191  315363 cri.go:89] found id: ""
	I1119 02:33:30.465255  315363 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:33:30.474328  315363 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 02:33:30.483132  315363 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 02:33:30.483196  315363 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 02:33:30.491249  315363 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 02:33:30.491272  315363 kubeadm.go:158] found existing configuration files:
	
	I1119 02:33:30.491320  315363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 02:33:30.499072  315363 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 02:33:30.499140  315363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 02:33:30.507018  315363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 02:33:30.514836  315363 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 02:33:30.514890  315363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 02:33:30.523396  315363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 02:33:30.532721  315363 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 02:33:30.532772  315363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 02:33:30.541409  315363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 02:33:30.550090  315363 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 02:33:30.550157  315363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 02:33:30.558693  315363 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 02:33:30.636057  315363 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1119 02:33:30.702518  315363 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1119 02:33:31.715333  301934 node_ready.go:57] node "old-k8s-version-691094" has "Ready":"False" status (will retry)
	W1119 02:33:33.715963  301934 node_ready.go:57] node "old-k8s-version-691094" has "Ready":"False" status (will retry)
	I1119 02:33:34.216972  301934 node_ready.go:49] node "old-k8s-version-691094" is "Ready"
	I1119 02:33:34.217010  301934 node_ready.go:38] duration metric: took 14.505569399s for node "old-k8s-version-691094" to be "Ready" ...
	I1119 02:33:34.217027  301934 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:33:34.217083  301934 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:33:34.235995  301934 api_server.go:72] duration metric: took 14.98160502s to wait for apiserver process to appear ...
	I1119 02:33:34.236024  301934 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:33:34.236046  301934 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1119 02:33:34.242612  301934 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1119 02:33:34.244469  301934 api_server.go:141] control plane version: v1.28.0
	I1119 02:33:34.244501  301934 api_server.go:131] duration metric: took 8.468136ms to wait for apiserver health ...
	I1119 02:33:34.244512  301934 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 02:33:34.249250  301934 system_pods.go:59] 8 kube-system pods found
	I1119 02:33:34.249293  301934 system_pods.go:61] "coredns-5dd5756b68-bbvqz" [56c0e21e-9d86-46c6-bc02-2a75554c0f07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:34.249301  301934 system_pods.go:61] "etcd-old-k8s-version-691094" [76abba5a-69d9-4c2c-aeb1-0225fad93a1d] Running
	I1119 02:33:34.249308  301934 system_pods.go:61] "kindnet-b9cwh" [3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352] Running
	I1119 02:33:34.249326  301934 system_pods.go:61] "kube-apiserver-old-k8s-version-691094" [f47acebe-490a-4f18-bb22-ab3375a572a9] Running
	I1119 02:33:34.249331  301934 system_pods.go:61] "kube-controller-manager-old-k8s-version-691094" [b1768141-5e2e-4b98-9747-6d96a1e4f121] Running
	I1119 02:33:34.249336  301934 system_pods.go:61] "kube-proxy-79df5" [d23dd2d3-6511-45fb-ae70-d1da7b9b6b28] Running
	I1119 02:33:34.249340  301934 system_pods.go:61] "kube-scheduler-old-k8s-version-691094" [ac8b4be1-254b-4014-93ea-35ba777dc762] Running
	I1119 02:33:34.249347  301934 system_pods.go:61] "storage-provisioner" [135636ea-f34f-4bfc-b2f6-cbbf3e91ca30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:34.249389  301934 system_pods.go:74] duration metric: took 4.842718ms to wait for pod list to return data ...
	I1119 02:33:34.249403  301934 default_sa.go:34] waiting for default service account to be created ...
	I1119 02:33:34.251979  301934 default_sa.go:45] found service account: "default"
	I1119 02:33:34.252000  301934 default_sa.go:55] duration metric: took 2.59102ms for default service account to be created ...
	I1119 02:33:34.252008  301934 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 02:33:34.256098  301934 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:34.256141  301934 system_pods.go:89] "coredns-5dd5756b68-bbvqz" [56c0e21e-9d86-46c6-bc02-2a75554c0f07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:34.256148  301934 system_pods.go:89] "etcd-old-k8s-version-691094" [76abba5a-69d9-4c2c-aeb1-0225fad93a1d] Running
	I1119 02:33:34.256155  301934 system_pods.go:89] "kindnet-b9cwh" [3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352] Running
	I1119 02:33:34.256158  301934 system_pods.go:89] "kube-apiserver-old-k8s-version-691094" [f47acebe-490a-4f18-bb22-ab3375a572a9] Running
	I1119 02:33:34.256163  301934 system_pods.go:89] "kube-controller-manager-old-k8s-version-691094" [b1768141-5e2e-4b98-9747-6d96a1e4f121] Running
	I1119 02:33:34.256166  301934 system_pods.go:89] "kube-proxy-79df5" [d23dd2d3-6511-45fb-ae70-d1da7b9b6b28] Running
	I1119 02:33:34.256169  301934 system_pods.go:89] "kube-scheduler-old-k8s-version-691094" [ac8b4be1-254b-4014-93ea-35ba777dc762] Running
	I1119 02:33:34.256173  301934 system_pods.go:89] "storage-provisioner" [135636ea-f34f-4bfc-b2f6-cbbf3e91ca30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:34.256204  301934 retry.go:31] will retry after 294.08163ms: missing components: kube-dns
	I1119 02:33:34.555117  301934 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:34.555149  301934 system_pods.go:89] "coredns-5dd5756b68-bbvqz" [56c0e21e-9d86-46c6-bc02-2a75554c0f07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:34.555155  301934 system_pods.go:89] "etcd-old-k8s-version-691094" [76abba5a-69d9-4c2c-aeb1-0225fad93a1d] Running
	I1119 02:33:34.555160  301934 system_pods.go:89] "kindnet-b9cwh" [3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352] Running
	I1119 02:33:34.555164  301934 system_pods.go:89] "kube-apiserver-old-k8s-version-691094" [f47acebe-490a-4f18-bb22-ab3375a572a9] Running
	I1119 02:33:34.555168  301934 system_pods.go:89] "kube-controller-manager-old-k8s-version-691094" [b1768141-5e2e-4b98-9747-6d96a1e4f121] Running
	I1119 02:33:34.555171  301934 system_pods.go:89] "kube-proxy-79df5" [d23dd2d3-6511-45fb-ae70-d1da7b9b6b28] Running
	I1119 02:33:34.555174  301934 system_pods.go:89] "kube-scheduler-old-k8s-version-691094" [ac8b4be1-254b-4014-93ea-35ba777dc762] Running
	I1119 02:33:34.555181  301934 system_pods.go:89] "storage-provisioner" [135636ea-f34f-4bfc-b2f6-cbbf3e91ca30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:34.555200  301934 retry.go:31] will retry after 239.208285ms: missing components: kube-dns
	I1119 02:33:34.801314  301934 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:34.801356  301934 system_pods.go:89] "coredns-5dd5756b68-bbvqz" [56c0e21e-9d86-46c6-bc02-2a75554c0f07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:34.801397  301934 system_pods.go:89] "etcd-old-k8s-version-691094" [76abba5a-69d9-4c2c-aeb1-0225fad93a1d] Running
	I1119 02:33:34.801408  301934 system_pods.go:89] "kindnet-b9cwh" [3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352] Running
	I1119 02:33:34.801414  301934 system_pods.go:89] "kube-apiserver-old-k8s-version-691094" [f47acebe-490a-4f18-bb22-ab3375a572a9] Running
	I1119 02:33:34.801421  301934 system_pods.go:89] "kube-controller-manager-old-k8s-version-691094" [b1768141-5e2e-4b98-9747-6d96a1e4f121] Running
	I1119 02:33:34.801426  301934 system_pods.go:89] "kube-proxy-79df5" [d23dd2d3-6511-45fb-ae70-d1da7b9b6b28] Running
	I1119 02:33:34.801432  301934 system_pods.go:89] "kube-scheduler-old-k8s-version-691094" [ac8b4be1-254b-4014-93ea-35ba777dc762] Running
	I1119 02:33:34.801446  301934 system_pods.go:89] "storage-provisioner" [135636ea-f34f-4bfc-b2f6-cbbf3e91ca30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:34.801465  301934 retry.go:31] will retry after 406.320974ms: missing components: kube-dns
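
The repeated "will retry after ...ms: missing components: kube-dns" lines come from minikube's retry helper, which re-checks the pod list on a growing, jittered delay. A minimal sketch of that retry-with-backoff shape (the exact schedule in retry.go differs; the failing closure here is a stand-in):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, sleeping a roughly doubling,
// jittered delay between failures, as the logged intervals suggest.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	calls := 0
	err := retry(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("missing components: kube-dns")
		}
		return nil
	})
	fmt.Println("done:", err)
}
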
	I1119 02:33:33.758898  307222 addons.go:515] duration metric: took 627.311179ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 02:33:34.007122  307222 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-483142" context rescaled to 1 replicas
	W1119 02:33:35.507777  307222 node_ready.go:57] node "no-preload-483142" has "Ready":"False" status (will retry)
	I1119 02:33:35.212153  301934 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:35.212193  301934 system_pods.go:89] "coredns-5dd5756b68-bbvqz" [56c0e21e-9d86-46c6-bc02-2a75554c0f07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:35.212202  301934 system_pods.go:89] "etcd-old-k8s-version-691094" [76abba5a-69d9-4c2c-aeb1-0225fad93a1d] Running
	I1119 02:33:35.212208  301934 system_pods.go:89] "kindnet-b9cwh" [3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352] Running
	I1119 02:33:35.212214  301934 system_pods.go:89] "kube-apiserver-old-k8s-version-691094" [f47acebe-490a-4f18-bb22-ab3375a572a9] Running
	I1119 02:33:35.212221  301934 system_pods.go:89] "kube-controller-manager-old-k8s-version-691094" [b1768141-5e2e-4b98-9747-6d96a1e4f121] Running
	I1119 02:33:35.212226  301934 system_pods.go:89] "kube-proxy-79df5" [d23dd2d3-6511-45fb-ae70-d1da7b9b6b28] Running
	I1119 02:33:35.212230  301934 system_pods.go:89] "kube-scheduler-old-k8s-version-691094" [ac8b4be1-254b-4014-93ea-35ba777dc762] Running
	I1119 02:33:35.212235  301934 system_pods.go:89] "storage-provisioner" [135636ea-f34f-4bfc-b2f6-cbbf3e91ca30] Running
	I1119 02:33:35.212252  301934 retry.go:31] will retry after 502.533324ms: missing components: kube-dns
	I1119 02:33:35.719172  301934 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:35.719211  301934 system_pods.go:89] "coredns-5dd5756b68-bbvqz" [56c0e21e-9d86-46c6-bc02-2a75554c0f07] Running
	I1119 02:33:35.719220  301934 system_pods.go:89] "etcd-old-k8s-version-691094" [76abba5a-69d9-4c2c-aeb1-0225fad93a1d] Running
	I1119 02:33:35.719225  301934 system_pods.go:89] "kindnet-b9cwh" [3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352] Running
	I1119 02:33:35.719231  301934 system_pods.go:89] "kube-apiserver-old-k8s-version-691094" [f47acebe-490a-4f18-bb22-ab3375a572a9] Running
	I1119 02:33:35.719238  301934 system_pods.go:89] "kube-controller-manager-old-k8s-version-691094" [b1768141-5e2e-4b98-9747-6d96a1e4f121] Running
	I1119 02:33:35.719243  301934 system_pods.go:89] "kube-proxy-79df5" [d23dd2d3-6511-45fb-ae70-d1da7b9b6b28] Running
	I1119 02:33:35.719248  301934 system_pods.go:89] "kube-scheduler-old-k8s-version-691094" [ac8b4be1-254b-4014-93ea-35ba777dc762] Running
	I1119 02:33:35.719254  301934 system_pods.go:89] "storage-provisioner" [135636ea-f34f-4bfc-b2f6-cbbf3e91ca30] Running
	I1119 02:33:35.719267  301934 system_pods.go:126] duration metric: took 1.46725409s to wait for k8s-apps to be running ...
	I1119 02:33:35.719280  301934 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 02:33:35.719333  301934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:33:35.733944  301934 system_svc.go:56] duration metric: took 14.654804ms WaitForService to wait for kubelet
	I1119 02:33:35.733974  301934 kubeadm.go:587] duration metric: took 16.479589704s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:33:35.733994  301934 node_conditions.go:102] verifying NodePressure condition ...
	I1119 02:33:35.736881  301934 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 02:33:35.736904  301934 node_conditions.go:123] node cpu capacity is 8
	I1119 02:33:35.736917  301934 node_conditions.go:105] duration metric: took 2.917087ms to run NodePressure ...
	I1119 02:33:35.736947  301934 start.go:242] waiting for startup goroutines ...
	I1119 02:33:35.736956  301934 start.go:247] waiting for cluster config update ...
	I1119 02:33:35.736966  301934 start.go:256] writing updated cluster config ...
	I1119 02:33:35.737252  301934 ssh_runner.go:195] Run: rm -f paused
	I1119 02:33:35.741706  301934 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:33:35.746693  301934 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-bbvqz" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:35.751796  301934 pod_ready.go:94] pod "coredns-5dd5756b68-bbvqz" is "Ready"
	I1119 02:33:35.751821  301934 pod_ready.go:86] duration metric: took 5.102077ms for pod "coredns-5dd5756b68-bbvqz" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:35.754811  301934 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:35.759826  301934 pod_ready.go:94] pod "etcd-old-k8s-version-691094" is "Ready"
	I1119 02:33:35.759852  301934 pod_ready.go:86] duration metric: took 5.017899ms for pod "etcd-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:35.763701  301934 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:35.768670  301934 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-691094" is "Ready"
	I1119 02:33:35.768693  301934 pod_ready.go:86] duration metric: took 4.969901ms for pod "kube-apiserver-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:35.772227  301934 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:36.146684  301934 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-691094" is "Ready"
	I1119 02:33:36.146718  301934 pod_ready.go:86] duration metric: took 374.468133ms for pod "kube-controller-manager-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:36.347472  301934 pod_ready.go:83] waiting for pod "kube-proxy-79df5" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:36.746791  301934 pod_ready.go:94] pod "kube-proxy-79df5" is "Ready"
	I1119 02:33:36.746855  301934 pod_ready.go:86] duration metric: took 399.347819ms for pod "kube-proxy-79df5" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:36.946961  301934 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:37.347059  301934 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-691094" is "Ready"
	I1119 02:33:37.347090  301934 pod_ready.go:86] duration metric: took 400.10454ms for pod "kube-scheduler-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:37.347108  301934 pod_ready.go:40] duration metric: took 1.605370699s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:33:37.406793  301934 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1119 02:33:37.408685  301934 out.go:203] 
	W1119 02:33:37.410052  301934 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1119 02:33:37.411691  301934 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1119 02:33:37.413481  301934 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-691094" cluster and "default" namespace by default
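
Note: the version-skew warning above is real but non-fatal: the host kubectl is v1.34.2 while this cluster runs v1.28.0, six minor versions apart, far beyond the one-minor-version skew kubectl officially supports. A version-matched client needs no extra install, as the log itself suggests; a minimal sketch using the binary under test (assuming the global -p profile flag applies to the kubectl subcommand as it does elsewhere):

	out/minikube-linux-amd64 kubectl -p old-k8s-version-691094 -- get pods -A
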
	W1119 02:33:37.511440  307222 node_ready.go:57] node "no-preload-483142" has "Ready":"False" status (will retry)
	W1119 02:33:40.007282  307222 node_ready.go:57] node "no-preload-483142" has "Ready":"False" status (will retry)
	I1119 02:33:42.519187  315363 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 02:33:42.519270  315363 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 02:33:42.519471  315363 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 02:33:42.519558  315363 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 02:33:42.519641  315363 kubeadm.go:319] OS: Linux
	I1119 02:33:42.519723  315363 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 02:33:42.519793  315363 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 02:33:42.519863  315363 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 02:33:42.519937  315363 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 02:33:42.520011  315363 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 02:33:42.520082  315363 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 02:33:42.520161  315363 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 02:33:42.520246  315363 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 02:33:42.520396  315363 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 02:33:42.520528  315363 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 02:33:42.520640  315363 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 02:33:42.520739  315363 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 02:33:42.522619  315363 out.go:252]   - Generating certificates and keys ...
	I1119 02:33:42.522717  315363 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 02:33:42.522778  315363 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 02:33:42.522841  315363 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 02:33:42.522898  315363 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 02:33:42.522948  315363 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 02:33:42.522986  315363 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 02:33:42.523065  315363 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 02:33:42.523231  315363 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-168452 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1119 02:33:42.523301  315363 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 02:33:42.523451  315363 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-168452 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1119 02:33:42.523527  315363 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 02:33:42.523599  315363 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 02:33:42.523658  315363 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 02:33:42.523737  315363 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 02:33:42.523787  315363 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 02:33:42.523833  315363 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 02:33:42.523879  315363 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 02:33:42.523945  315363 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 02:33:42.524004  315363 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 02:33:42.524082  315363 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 02:33:42.524137  315363 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 02:33:42.525751  315363 out.go:252]   - Booting up control plane ...
	I1119 02:33:42.525831  315363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 02:33:42.525893  315363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 02:33:42.525997  315363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 02:33:42.526121  315363 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 02:33:42.526235  315363 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 02:33:42.526323  315363 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 02:33:42.526401  315363 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 02:33:42.526441  315363 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 02:33:42.526546  315363 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 02:33:42.526633  315363 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 02:33:42.526684  315363 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001668097s
	I1119 02:33:42.526759  315363 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 02:33:42.526828  315363 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1119 02:33:42.526912  315363 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 02:33:42.526979  315363 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 02:33:42.527060  315363 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.143588684s
	I1119 02:33:42.527116  315363 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.751163591s
	I1119 02:33:42.527185  315363 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002351229s
	I1119 02:33:42.527279  315363 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 02:33:42.527418  315363 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 02:33:42.527475  315363 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 02:33:42.527642  315363 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-168452 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 02:33:42.527698  315363 kubeadm.go:319] [bootstrap-token] Using token: f9q4qi.t8dfm2zfbs2z2sgs
	I1119 02:33:42.529100  315363 out.go:252]   - Configuring RBAC rules ...
	I1119 02:33:42.529232  315363 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 02:33:42.529348  315363 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 02:33:42.529576  315363 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 02:33:42.529779  315363 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 02:33:42.529949  315363 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 02:33:42.530070  315363 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 02:33:42.530217  315363 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 02:33:42.530321  315363 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 02:33:42.530403  315363 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 02:33:42.530413  315363 kubeadm.go:319] 
	I1119 02:33:42.530492  315363 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 02:33:42.530502  315363 kubeadm.go:319] 
	I1119 02:33:42.530604  315363 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 02:33:42.530618  315363 kubeadm.go:319] 
	I1119 02:33:42.530647  315363 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 02:33:42.530726  315363 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 02:33:42.530797  315363 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 02:33:42.530809  315363 kubeadm.go:319] 
	I1119 02:33:42.530880  315363 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 02:33:42.530885  315363 kubeadm.go:319] 
	I1119 02:33:42.530954  315363 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 02:33:42.530981  315363 kubeadm.go:319] 
	I1119 02:33:42.531052  315363 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 02:33:42.531164  315363 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 02:33:42.531261  315363 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 02:33:42.531271  315363 kubeadm.go:319] 
	I1119 02:33:42.531424  315363 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 02:33:42.531551  315363 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 02:33:42.531570  315363 kubeadm.go:319] 
	I1119 02:33:42.531690  315363 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token f9q4qi.t8dfm2zfbs2z2sgs \
	I1119 02:33:42.531850  315363 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7672c4bccf136c60db27ba1101c0494bf335a1a16bcd609d07ab95cd78c58a5a \
	I1119 02:33:42.531878  315363 kubeadm.go:319] 	--control-plane 
	I1119 02:33:42.531885  315363 kubeadm.go:319] 
	I1119 02:33:42.531966  315363 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 02:33:42.531972  315363 kubeadm.go:319] 
	I1119 02:33:42.532046  315363 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token f9q4qi.t8dfm2zfbs2z2sgs \
	I1119 02:33:42.532149  315363 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7672c4bccf136c60db27ba1101c0494bf335a1a16bcd609d07ab95cd78c58a5a 
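
Note: the --discovery-token-ca-cert-hash in the join commands above lets a joining node pin the cluster CA. The hash can be recomputed on the control plane and compared against the value printed by kubeadm; a sketch of the standard OpenSSL pipeline from the kubeadm docs, with the CA path following the certificateDir "/var/lib/minikube/certs" reported earlier (and assuming an RSA CA key, kubeadm's default):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
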
	I1119 02:33:42.532161  315363 cni.go:84] Creating CNI manager for ""
	I1119 02:33:42.532167  315363 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 02:33:42.535194  315363 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 02:33:42.536650  315363 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 02:33:42.541710  315363 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 02:33:42.541734  315363 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 02:33:42.556040  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 02:33:42.817018  315363 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 02:33:42.817147  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:42.817217  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-168452 minikube.k8s.io/updated_at=2025_11_19T02_33_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=embed-certs-168452 minikube.k8s.io/primary=true
	I1119 02:33:42.828812  315363 ops.go:34] apiserver oom_adj: -16
	I1119 02:33:42.896633  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:43.396810  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:43.896801  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:44.397677  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1119 02:33:46.450455  208368 system_pods.go:55] pod list returned error: the server was unable to return a response in the time allotted, but may still be processing the request (get pods)
	I1119 02:33:46.452233  208368 out.go:203] 
	W1119 02:33:46.453522  208368 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for system pods: apiserver never returned a pod list
	W1119 02:33:46.453544  208368 out.go:285] * 
	W1119 02:33:46.455831  208368 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 02:33:46.457044  208368 out.go:203] 
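
Note: the GUEST_START failure above belongs to a different profile than the kubeadm output interleaved around it; the process IDs (208368 here, 315363 for the embed-certs init) separate the streams. The boxed advice is the right first step for triage; a sketch (the failing profile's name does not appear in this excerpt, so it is left as a placeholder):

	out/minikube-linux-amd64 logs -p <profile> --file=logs.txt
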
	W1119 02:33:42.007484  307222 node_ready.go:57] node "no-preload-483142" has "Ready":"False" status (will retry)
	W1119 02:33:44.007813  307222 node_ready.go:57] node "no-preload-483142" has "Ready":"False" status (will retry)
	W1119 02:33:46.008192  307222 node_ready.go:57] node "no-preload-483142" has "Ready":"False" status (will retry)
	I1119 02:33:44.897377  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:45.397137  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:45.897616  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:46.397448  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:46.896710  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:47.397632  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:47.897150  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:48.003028  315363 kubeadm.go:1114] duration metric: took 5.18596901s to wait for elevateKubeSystemPrivileges
	I1119 02:33:48.003056  315363 kubeadm.go:403] duration metric: took 17.566632128s to StartCluster
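
Note: the repeated `kubectl get sa default` runs above are a readiness poll: minikube retries at roughly 500 ms intervals until the `default` ServiceAccount exists before relying on the cluster-admin binding it created with `kubectl create clusterrolebinding minikube-rbac`; that poll is what the 5.19 s elevateKubeSystemPrivileges metric measures. The equivalent wait as a plain shell loop, a sketch rather than minikube's actual Go implementation:

	# poll until the default ServiceAccount is created, then proceed
	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
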
	I1119 02:33:48.003071  315363 settings.go:142] acquiring lock: {Name:mka23b77a5b0acf39a3c823ca2c708cd59c6f5ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:48.003125  315363 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11107/kubeconfig
	I1119 02:33:48.005668  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/kubeconfig: {Name:mka1017ae43351e828a5b28d36aa5ae57c3e40e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:48.005964  315363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 02:33:48.005984  315363 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 02:33:48.006098  315363 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:33:48.006191  315363 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-168452"
	I1119 02:33:48.006211  315363 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-168452"
	I1119 02:33:48.006209  315363 addons.go:70] Setting default-storageclass=true in profile "embed-certs-168452"
	I1119 02:33:48.006218  315363 config.go:182] Loaded profile config "embed-certs-168452": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:33:48.006231  315363 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-168452"
	I1119 02:33:48.006249  315363 host.go:66] Checking if "embed-certs-168452" exists ...
	I1119 02:33:48.006692  315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Status}}
	I1119 02:33:48.006819  315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Status}}
	I1119 02:33:48.007901  315363 out.go:179] * Verifying Kubernetes components...
	I1119 02:33:48.009142  315363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:33:48.032568  315363 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	365ce1a4e43ef       56cc512116c8f       8 seconds ago       Running             busybox                   0                   84edcb21162b2       busybox                                          default
	1e139eec825de       ead0a4a53df89       14 seconds ago      Running             coredns                   0                   06ddb433194da       coredns-5dd5756b68-bbvqz                         kube-system
	e773989cb5b97       6e38f40d628db       14 seconds ago      Running             storage-provisioner       0                   600c711a387b1       storage-provisioner                              kube-system
	dda3cde60adce       409467f978b4a       25 seconds ago      Running             kindnet-cni               0                   041636111f700       kindnet-b9cwh                                    kube-system
	5dde09d6b5534       ea1030da44aa1       29 seconds ago      Running             kube-proxy                0                   03988ca85cd54       kube-proxy-79df5                                 kube-system
	ae40aa345e79c       4be79c38a4bab       47 seconds ago      Running             kube-controller-manager   0                   12de271987c00       kube-controller-manager-old-k8s-version-691094   kube-system
	b77b79fa6a466       f6f496300a2ae       47 seconds ago      Running             kube-scheduler            0                   6f3bdd55a5e5d       kube-scheduler-old-k8s-version-691094            kube-system
	dbc14fc0cc43a       73deb9a3f7025       47 seconds ago      Running             etcd                      0                   725875976c48d       etcd-old-k8s-version-691094                      kube-system
	2710c5af3eee6       bb5e0dde9054c       47 seconds ago      Running             kube-apiserver            0                   d1a4659f2bb05       kube-apiserver-old-k8s-version-691094            kube-system
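
Note: the table above is the CRI-level view of the old-k8s-version-691094 node, covering pods from both kube-system and default. The same listing can be reproduced against containerd's CRI socket from inside the node; a sketch, assuming crictl ships in the node image as it does in standard minikube images:

	out/minikube-linux-amd64 ssh -p old-k8s-version-691094 -- sudo crictl ps -a
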
	
	
	==> containerd <==
	Nov 19 02:33:34 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:34.230807866Z" level=info msg="StartContainer for \"e773989cb5b9719f34c18e9670a458f821a72e3b0c1f48c1667978ae16fa12a4\""
	Nov 19 02:33:34 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:34.232491388Z" level=info msg="connecting to shim e773989cb5b9719f34c18e9670a458f821a72e3b0c1f48c1667978ae16fa12a4" address="unix:///run/containerd/s/24b671fd6ae6e1c46e5997e6e8fbc89d9c643c0b983828d1d7f18ff2d3ba023f" protocol=ttrpc version=3
	Nov 19 02:33:34 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:34.234021335Z" level=info msg="Container 1e139eec825de9114abd6701b9ab42ee2b8ab9b766ece6ead08550a8ad647722: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 02:33:34 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:34.243041735Z" level=info msg="CreateContainer within sandbox \"06ddb433194dae11f9f24856f079619dc43b22d6efbf43415d290df94aba9325\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1e139eec825de9114abd6701b9ab42ee2b8ab9b766ece6ead08550a8ad647722\""
	Nov 19 02:33:34 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:34.244135610Z" level=info msg="StartContainer for \"1e139eec825de9114abd6701b9ab42ee2b8ab9b766ece6ead08550a8ad647722\""
	Nov 19 02:33:34 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:34.245331761Z" level=info msg="connecting to shim 1e139eec825de9114abd6701b9ab42ee2b8ab9b766ece6ead08550a8ad647722" address="unix:///run/containerd/s/03135e70f496d3ee336cea3910b2f223365bccd66022f2def8a237460898a081" protocol=ttrpc version=3
	Nov 19 02:33:34 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:34.294883900Z" level=info msg="StartContainer for \"e773989cb5b9719f34c18e9670a458f821a72e3b0c1f48c1667978ae16fa12a4\" returns successfully"
	Nov 19 02:33:34 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:34.299330808Z" level=info msg="StartContainer for \"1e139eec825de9114abd6701b9ab42ee2b8ab9b766ece6ead08550a8ad647722\" returns successfully"
	Nov 19 02:33:37 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:37.944509536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:90639f81-cb90-45ed-a6f9-0112e27e5bcb,Namespace:default,Attempt:0,}"
	Nov 19 02:33:37 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:37.998189490Z" level=info msg="connecting to shim 84edcb21162b28e6c4334781fde734e7818dc08098d3e7b6f9bebcbdd7484a77" address="unix:///run/containerd/s/8aad71720f1e0fd951e14bd3c26cd9557b67ca3cc26df8334d136754eab93e47" namespace=k8s.io protocol=ttrpc version=3
	Nov 19 02:33:38 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:38.079356801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:90639f81-cb90-45ed-a6f9-0112e27e5bcb,Namespace:default,Attempt:0,} returns sandbox id \"84edcb21162b28e6c4334781fde734e7818dc08098d3e7b6f9bebcbdd7484a77\""
	Nov 19 02:33:38 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:38.081263124Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 02:33:40 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:40.375425944Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 02:33:40 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:40.376680652Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396644"
	Nov 19 02:33:40 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:40.377796397Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 02:33:40 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:40.380436656Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 02:33:40 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:40.380913727Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.299607364s"
	Nov 19 02:33:40 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:40.380948526Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 19 02:33:40 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:40.382715067Z" level=info msg="CreateContainer within sandbox \"84edcb21162b28e6c4334781fde734e7818dc08098d3e7b6f9bebcbdd7484a77\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 19 02:33:40 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:40.390767649Z" level=info msg="Container 365ce1a4e43ef3582dc9c7cdeac6a51a19501124788117bafd9ac6879a6e8f3e: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 02:33:40 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:40.397566913Z" level=info msg="CreateContainer within sandbox \"84edcb21162b28e6c4334781fde734e7818dc08098d3e7b6f9bebcbdd7484a77\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"365ce1a4e43ef3582dc9c7cdeac6a51a19501124788117bafd9ac6879a6e8f3e\""
	Nov 19 02:33:40 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:40.398187959Z" level=info msg="StartContainer for \"365ce1a4e43ef3582dc9c7cdeac6a51a19501124788117bafd9ac6879a6e8f3e\""
	Nov 19 02:33:40 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:40.399054796Z" level=info msg="connecting to shim 365ce1a4e43ef3582dc9c7cdeac6a51a19501124788117bafd9ac6879a6e8f3e" address="unix:///run/containerd/s/8aad71720f1e0fd951e14bd3c26cd9557b67ca3cc26df8334d136754eab93e47" protocol=ttrpc version=3
	Nov 19 02:33:40 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:40.449054855Z" level=info msg="StartContainer for \"365ce1a4e43ef3582dc9c7cdeac6a51a19501124788117bafd9ac6879a6e8f3e\" returns successfully"
	Nov 19 02:33:47 old-k8s-version-691094 containerd[658]: E1119 02:33:47.716892     658 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [1e139eec825de9114abd6701b9ab42ee2b8ab9b766ece6ead08550a8ad647722] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41644 - 1915 "HINFO IN 1315140230493656931.2438502800312971411. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.015921461s
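
Note: the single NXDOMAIN line above is CoreDNS probing itself: the loop plugin fires a random-name HINFO query at startup to detect forwarding loops, so this entry is expected and harmless. Cluster DNS can be spot-checked with the standard probe from the Kubernetes DNS debugging docs; a sketch (busybox:1.28 is chosen deliberately, since nslookup in newer busybox images is known to misbehave):

	kubectl --context old-k8s-version-691094 run dns-test --rm -it \
	  --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default
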
	
	
	==> describe nodes <==
	Name:               old-k8s-version-691094
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-691094
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=old-k8s-version-691094
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T02_33_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 02:33:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-691094
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 02:33:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 02:33:37 +0000   Wed, 19 Nov 2025 02:33:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 02:33:37 +0000   Wed, 19 Nov 2025 02:33:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 02:33:37 +0000   Wed, 19 Nov 2025 02:33:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 02:33:37 +0000   Wed, 19 Nov 2025 02:33:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-691094
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                3f7ebf4a-3703-452a-b0e3-7f24129d6ff8
	  Boot ID:                    fea1659d-b751-4f87-a281-819adf52de2d
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-5dd5756b68-bbvqz                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     30s
	  kube-system                 etcd-old-k8s-version-691094                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         42s
	  kube-system                 kindnet-b9cwh                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-old-k8s-version-691094             250m (3%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-controller-manager-old-k8s-version-691094    200m (2%)     0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-proxy-79df5                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-old-k8s-version-691094             100m (1%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
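
Note: the percentages above are computed against the Capacity block: CPU requests are 100m+100m+100m+250m+200m+100m = 850m of 8000m ≈ 10.6%, displayed as 10%; the only CPU limit is kindnet's 100m (≈1.25%, displayed as 1%); memory requests are 70Mi+100Mi+50Mi = 220Mi against ~31.3Gi of capacity, well under 1%, displayed as 0%.
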
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29s                kube-proxy       
	  Normal  Starting                 49s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  48s (x8 over 49s)  kubelet          Node old-k8s-version-691094 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    48s (x8 over 49s)  kubelet          Node old-k8s-version-691094 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     48s (x7 over 49s)  kubelet          Node old-k8s-version-691094 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  48s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 42s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  42s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  42s                kubelet          Node old-k8s-version-691094 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s                kubelet          Node old-k8s-version-691094 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s                kubelet          Node old-k8s-version-691094 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           31s                node-controller  Node old-k8s-version-691094 event: Registered Node old-k8s-version-691094 in Controller
	  Normal  NodeReady                16s                kubelet          Node old-k8s-version-691094 status is now: NodeReady
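
Note on the Age column: entries like "48s (x8 over 49s)" are deduplicated events; the condition fired 8 times, first observed 49s ago and most recently 48s ago. The doubled "Starting kubelet." pair is consistent with minikube restarting the kubelet during bring-up rather than a crash loop.
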
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 cb 73 dc 0d 8a 08 06
	[Nov19 02:31] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 74 0c d7 a6 53 08 06
	[  +0.000339] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 cb 73 dc 0d 8a 08 06
	[ +28.680399] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 e9 7c 92 36 13 08 06
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff fa 7c 46 16 38 40 08 06
	[Nov19 02:32] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1a f6 63 65 24 bc 08 06
	[  +4.552839] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 88 80 5a bc 94 08 06
	[ +11.086189] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 76 d1 26 7f 3d 08 06
	[  +0.000377] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa 7c 46 16 38 40 08 06
	[  +9.270754] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a2 49 fd 34 51 3b 08 06
	[  +0.000702] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 88 80 5a bc 94 08 06
	[ +23.593864] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca 86 43 5f 18 4c 08 06
	[  +0.000495] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a f6 63 65 24 bc 08 06
	
	
	==> etcd [dbc14fc0cc43a9945343d07a4033d270d1157c5a3b861d1386847247f42a1497] <==
	{"level":"info","ts":"2025-11-19T02:33:02.033025Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-11-19T02:33:02.040102Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T02:33:02.040299Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T02:33:02.04213Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T02:33:02.042472Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-19T02:33:04.985971Z","caller":"traceutil/trace.go:171","msg":"trace[658045218] transaction","detail":"{read_only:false; response_revision:186; number_of_response:1; }","duration":"139.194171ms","start":"2025-11-19T02:33:04.846735Z","end":"2025-11-19T02:33:04.98593Z","steps":["trace[658045218] 'process raft request'  (duration: 56.292983ms)","trace[658045218] 'compare'  (duration: 82.76773ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T02:33:17.810576Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.742417ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/disruption-controller\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-19T02:33:17.810663Z","caller":"traceutil/trace.go:171","msg":"trace[1420202652] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/disruption-controller; range_end:; response_count:0; response_revision:321; }","duration":"138.866997ms","start":"2025-11-19T02:33:17.671781Z","end":"2025-11-19T02:33:17.810648Z","steps":["trace[1420202652] 'range keys from in-memory index tree'  (duration: 138.650303ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T02:33:18.046425Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.242001ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-19T02:33:18.046593Z","caller":"traceutil/trace.go:171","msg":"trace[71002672] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:0; response_revision:322; }","duration":"124.423394ms","start":"2025-11-19T02:33:17.922148Z","end":"2025-11-19T02:33:18.046571Z","steps":["trace[71002672] 'range keys from in-memory index tree'  (duration: 124.156675ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T02:33:18.259188Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.027933ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-old-k8s-version-691094\" ","response":"range_response_count:1 size:4323"}
	{"level":"info","ts":"2025-11-19T02:33:18.259309Z","caller":"traceutil/trace.go:171","msg":"trace[1217585489] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-old-k8s-version-691094; range_end:; response_count:1; response_revision:323; }","duration":"111.181979ms","start":"2025-11-19T02:33:18.148101Z","end":"2025-11-19T02:33:18.259282Z","steps":["trace[1217585489] 'range keys from in-memory index tree'  (duration: 110.919931ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:33:18.388325Z","caller":"traceutil/trace.go:171","msg":"trace[907749827] transaction","detail":"{read_only:false; response_revision:325; number_of_response:1; }","duration":"121.659188ms","start":"2025-11-19T02:33:18.266633Z","end":"2025-11-19T02:33:18.388292Z","steps":["trace[907749827] 'process raft request'  (duration: 100.125362ms)","trace[907749827] 'compare'  (duration: 21.381062ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T02:33:18.455915Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.143906ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-11-19T02:33:18.455992Z","caller":"traceutil/trace.go:171","msg":"trace[960144194] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/deployment-controller; range_end:; response_count:1; response_revision:327; }","duration":"117.233633ms","start":"2025-11-19T02:33:18.338743Z","end":"2025-11-19T02:33:18.455976Z","steps":["trace[960144194] 'agreement among raft nodes before linearized reading'  (duration: 117.101216ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T02:33:18.45598Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.711295ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpoint-controller\" ","response":"range_response_count:1 size:203"}
	{"level":"info","ts":"2025-11-19T02:33:18.45604Z","caller":"traceutil/trace.go:171","msg":"trace[783663048] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpoint-controller; range_end:; response_count:1; response_revision:327; }","duration":"129.779137ms","start":"2025-11-19T02:33:18.326242Z","end":"2025-11-19T02:33:18.456021Z","steps":["trace[783663048] 'agreement among raft nodes before linearized reading'  (duration: 129.651787ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:33:24.321121Z","caller":"traceutil/trace.go:171","msg":"trace[1897328613] linearizableReadLoop","detail":"{readStateIndex:425; appliedIndex:424; }","duration":"106.777468ms","start":"2025-11-19T02:33:24.214325Z","end":"2025-11-19T02:33:24.321102Z","steps":["trace[1897328613] 'read index received'  (duration: 106.680216ms)","trace[1897328613] 'applied index is now lower than readState.Index'  (duration: 96.455µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T02:33:24.321178Z","caller":"traceutil/trace.go:171","msg":"trace[1221524606] transaction","detail":"{read_only:false; response_revision:411; number_of_response:1; }","duration":"145.359127ms","start":"2025-11-19T02:33:24.175789Z","end":"2025-11-19T02:33:24.321148Z","steps":["trace[1221524606] 'process raft request'  (duration: 145.189798ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T02:33:24.321267Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.932695ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/old-k8s-version-691094\" ","response":"range_response_count:1 size:4739"}
	{"level":"info","ts":"2025-11-19T02:33:24.321304Z","caller":"traceutil/trace.go:171","msg":"trace[1542339758] range","detail":"{range_begin:/registry/minions/old-k8s-version-691094; range_end:; response_count:1; response_revision:411; }","duration":"107.003966ms","start":"2025-11-19T02:33:24.21429Z","end":"2025-11-19T02:33:24.321294Z","steps":["trace[1542339758] 'agreement among raft nodes before linearized reading'  (duration: 106.897787ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:33:24.391988Z","caller":"traceutil/trace.go:171","msg":"trace[537055186] transaction","detail":"{read_only:false; response_revision:412; number_of_response:1; }","duration":"105.729066ms","start":"2025-11-19T02:33:24.286236Z","end":"2025-11-19T02:33:24.391965Z","steps":["trace[537055186] 'process raft request'  (duration: 105.588299ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:33:24.54902Z","caller":"traceutil/trace.go:171","msg":"trace[463438125] transaction","detail":"{read_only:false; response_revision:413; number_of_response:1; }","duration":"219.879541ms","start":"2025-11-19T02:33:24.329104Z","end":"2025-11-19T02:33:24.548984Z","steps":["trace[463438125] 'process raft request'  (duration: 199.511127ms)","trace[463438125] 'compare'  (duration: 20.266985ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T02:33:24.879054Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"165.153166ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/old-k8s-version-691094\" ","response":"range_response_count:1 size:4739"}
	{"level":"info","ts":"2025-11-19T02:33:24.879128Z","caller":"traceutil/trace.go:171","msg":"trace[886070201] range","detail":"{range_begin:/registry/minions/old-k8s-version-691094; range_end:; response_count:1; response_revision:413; }","duration":"165.246043ms","start":"2025-11-19T02:33:24.713866Z","end":"2025-11-19T02:33:24.879112Z","steps":["trace[886070201] 'range keys from in-memory index tree'  (duration: 165.042303ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:33:49 up  1:16,  0 user,  load average: 5.31, 3.83, 2.54
	Linux old-k8s-version-691094 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [dda3cde60adcefe6dc905f202c5021fdb56f1c94c37adce1fdae5c18d6080acc] <==
	I1119 02:33:23.381383       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 02:33:23.381729       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1119 02:33:23.381928       1 main.go:148] setting mtu 1500 for CNI 
	I1119 02:33:23.381949       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 02:33:23.381981       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T02:33:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 02:33:23.680627       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 02:33:23.680906       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 02:33:23.680921       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 02:33:23.780187       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 02:33:24.081044       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 02:33:24.081082       1 metrics.go:72] Registering metrics
	I1119 02:33:24.081144       1 controller.go:711] "Syncing nftables rules"
	I1119 02:33:33.680704       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 02:33:33.680751       1 main.go:301] handling current node
	I1119 02:33:43.681343       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 02:33:43.681445       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2710c5af3eee6491ef45de25344cda5fa8a6bddc3604a03908e7ec36cc3ec259] <==
	I1119 02:33:03.449188       1 shared_informer.go:318] Caches are synced for configmaps
	I1119 02:33:03.450069       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1119 02:33:03.451682       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1119 02:33:03.451712       1 aggregator.go:166] initial CRD sync complete...
	I1119 02:33:03.451721       1 autoregister_controller.go:141] Starting autoregister controller
	I1119 02:33:03.451728       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 02:33:03.451870       1 cache.go:39] Caches are synced for autoregister controller
	I1119 02:33:03.451900       1 controller.go:624] quota admission added evaluator for: namespaces
	I1119 02:33:03.454460       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1119 02:33:03.652666       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 02:33:04.364452       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 02:33:04.370792       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 02:33:04.370811       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 02:33:05.242226       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 02:33:05.293279       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 02:33:05.360305       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 02:33:05.367019       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1119 02:33:05.368321       1 controller.go:624] quota admission added evaluator for: endpoints
	I1119 02:33:05.374006       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 02:33:05.419797       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1119 02:33:06.994734       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1119 02:33:07.008440       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 02:33:07.022009       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1119 02:33:18.778955       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1119 02:33:19.128034       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [ae40aa345e79cbe278439afee2a5038c48c1ac05f3405d97259e5af73e3fbf92] <==
	I1119 02:33:18.479638       1 shared_informer.go:318] Caches are synced for resource quota
	I1119 02:33:18.481935       1 shared_informer.go:318] Caches are synced for attach detach
	I1119 02:33:18.565340       1 shared_informer.go:318] Caches are synced for resource quota
	I1119 02:33:18.783277       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1119 02:33:18.883719       1 shared_informer.go:318] Caches are synced for garbage collector
	I1119 02:33:18.927024       1 shared_informer.go:318] Caches are synced for garbage collector
	I1119 02:33:18.927059       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1119 02:33:19.141958       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-79df5"
	I1119 02:33:19.147560       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-b9cwh"
	I1119 02:33:19.292248       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-hbwsw"
	I1119 02:33:19.320651       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-bbvqz"
	I1119 02:33:19.334988       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="551.937804ms"
	I1119 02:33:19.346766       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.713541ms"
	I1119 02:33:19.347224       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="102.884µs"
	I1119 02:33:19.347583       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="248.565µs"
	I1119 02:33:19.743275       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1119 02:33:19.759521       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-hbwsw"
	I1119 02:33:19.767623       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="24.389324ms"
	I1119 02:33:19.777179       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.488995ms"
	I1119 02:33:19.777312       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.241µs"
	I1119 02:33:33.782164       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="109.659µs"
	I1119 02:33:33.799126       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="108.486µs"
	I1119 02:33:35.226953       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.284192ms"
	I1119 02:33:35.227058       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.818µs"
	I1119 02:33:38.373535       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
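
Each "Event occurred ... SuccessfulCreate" line above is also persisted as a core/v1 Event object, so the same history can be pulled back out of the apiserver after the fact. A sketch, assuming a clientset built as in the earlier PriorityClass example; "reason" is one of the field selectors the events API supports.

	package example

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// printCreateEvents lists the Event objects behind the SuccessfulCreate
	// lines recorded by the controller-manager above.
	func printCreateEvents(ctx context.Context, cs kubernetes.Interface) error {
		evs, err := cs.CoreV1().Events("kube-system").List(ctx, metav1.ListOptions{
			FieldSelector: "reason=SuccessfulCreate",
		})
		if err != nil {
			return err
		}
		for _, e := range evs.Items {
			fmt.Printf("%s %s/%s: %s\n", e.InvolvedObject.Kind, e.Namespace, e.InvolvedObject.Name, e.Message)
		}
		return nil
	}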
	
	
	==> kube-proxy [5dde09d6b5534707795709157ee81edeb05e31172278aaf5526347ba15edf149] <==
	I1119 02:33:19.808172       1 server_others.go:69] "Using iptables proxy"
	I1119 02:33:19.820176       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1119 02:33:19.845599       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 02:33:19.848312       1 server_others.go:152] "Using iptables Proxier"
	I1119 02:33:19.848362       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1119 02:33:19.848394       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1119 02:33:19.848428       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1119 02:33:19.848742       1 server.go:846] "Version info" version="v1.28.0"
	I1119 02:33:19.848757       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:33:19.849540       1 config.go:97] "Starting endpoint slice config controller"
	I1119 02:33:19.849569       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1119 02:33:19.849599       1 config.go:188] "Starting service config controller"
	I1119 02:33:19.849621       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1119 02:33:19.849806       1 config.go:315] "Starting node config controller"
	I1119 02:33:19.849822       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1119 02:33:19.949953       1 shared_informer.go:318] Caches are synced for node config
	I1119 02:33:19.949980       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1119 02:33:19.949995       1 shared_informer.go:318] Caches are synced for service config
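
The "Waiting for caches to sync" / "Caches are synced" pairs above come from client-go's shared-informer machinery: kube-proxy will not program rules until its service, endpoint-slice, and node informers have delivered their initial LIST. A minimal sketch of that pattern (illustrative, not kube-proxy's actual code; the resync period is an arbitrary choice):

	package example

	import (
		"time"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
	)

	// waitForServices starts a service informer and blocks until its initial
	// LIST is in the local cache, the handshake the log lines above record.
	func waitForServices(cs kubernetes.Interface, stop <-chan struct{}) bool {
		factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
		svcInformer := factory.Core().V1().Services().Informer()
		factory.Start(stop)
		return cache.WaitForCacheSync(stop, svcInformer.HasSynced)
	}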
	
	
	==> kube-scheduler [b77b79fa6a466aa3e18c8bd7eba3c607337982e750126d443bc923b253db1773] <==
	W1119 02:33:04.399499       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1119 02:33:04.399543       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1119 02:33:04.424049       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1119 02:33:04.424093       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1119 02:33:04.458386       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1119 02:33:04.458837       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1119 02:33:04.470115       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1119 02:33:04.470164       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1119 02:33:04.561554       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1119 02:33:04.561594       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1119 02:33:04.673671       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1119 02:33:04.673712       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1119 02:33:04.688034       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1119 02:33:04.688077       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1119 02:33:04.688037       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1119 02:33:04.688108       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1119 02:33:04.689780       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1119 02:33:04.689824       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1119 02:33:04.704173       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1119 02:33:04.704221       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1119 02:33:04.736082       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1119 02:33:04.736401       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1119 02:33:04.770743       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1119 02:33:04.770839       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1119 02:33:06.915816       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
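
The forbidden warnings above are the usual startup race: the scheduler's informers begin listing before the RBAC bootstrap roles exist, and the final "Caches are synced" line at 02:33:06 shows they resolve on retry. Whether a given list is permitted can be asked directly with a SubjectAccessReview (kubectl auth can-i list pods --as=system:kube-scheduler is the CLI equivalent); a sketch:

	package example

	import (
		"context"

		authv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// canSchedulerList asks the apiserver whether system:kube-scheduler may
	// list a cluster-scoped resource, the operation reported as forbidden above.
	func canSchedulerList(ctx context.Context, cs kubernetes.Interface, group, resource string) (bool, error) {
		sar := &authv1.SubjectAccessReview{
			Spec: authv1.SubjectAccessReviewSpec{
				User: "system:kube-scheduler",
				ResourceAttributes: &authv1.ResourceAttributes{
					Verb:     "list",
					Group:    group,
					Resource: resource,
				},
			},
		}
		resp, err := cs.AuthorizationV1().SubjectAccessReviews().Create(ctx, sar, metav1.CreateOptions{})
		if err != nil {
			return false, err
		}
		return resp.Status.Allowed, nil
	}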
	
	
	==> kubelet <==
	Nov 19 02:33:18 old-k8s-version-691094 kubelet[1520]: I1119 02:33:18.503691    1520 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 19 02:33:19 old-k8s-version-691094 kubelet[1520]: I1119 02:33:19.150157    1520 topology_manager.go:215] "Topology Admit Handler" podUID="d23dd2d3-6511-45fb-ae70-d1da7b9b6b28" podNamespace="kube-system" podName="kube-proxy-79df5"
	Nov 19 02:33:19 old-k8s-version-691094 kubelet[1520]: I1119 02:33:19.159071    1520 topology_manager.go:215] "Topology Admit Handler" podUID="3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352" podNamespace="kube-system" podName="kindnet-b9cwh"
	Nov 19 02:33:19 old-k8s-version-691094 kubelet[1520]: I1119 02:33:19.262847    1520 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352-xtables-lock\") pod \"kindnet-b9cwh\" (UID: \"3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352\") " pod="kube-system/kindnet-b9cwh"
	Nov 19 02:33:19 old-k8s-version-691094 kubelet[1520]: I1119 02:33:19.262970    1520 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352-lib-modules\") pod \"kindnet-b9cwh\" (UID: \"3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352\") " pod="kube-system/kindnet-b9cwh"
	Nov 19 02:33:19 old-k8s-version-691094 kubelet[1520]: I1119 02:33:19.263115    1520 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88q7s\" (UniqueName: \"kubernetes.io/projected/3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352-kube-api-access-88q7s\") pod \"kindnet-b9cwh\" (UID: \"3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352\") " pod="kube-system/kindnet-b9cwh"
	Nov 19 02:33:19 old-k8s-version-691094 kubelet[1520]: I1119 02:33:19.263270    1520 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d23dd2d3-6511-45fb-ae70-d1da7b9b6b28-kube-proxy\") pod \"kube-proxy-79df5\" (UID: \"d23dd2d3-6511-45fb-ae70-d1da7b9b6b28\") " pod="kube-system/kube-proxy-79df5"
	Nov 19 02:33:19 old-k8s-version-691094 kubelet[1520]: I1119 02:33:19.263312    1520 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nccg9\" (UniqueName: \"kubernetes.io/projected/d23dd2d3-6511-45fb-ae70-d1da7b9b6b28-kube-api-access-nccg9\") pod \"kube-proxy-79df5\" (UID: \"d23dd2d3-6511-45fb-ae70-d1da7b9b6b28\") " pod="kube-system/kube-proxy-79df5"
	Nov 19 02:33:19 old-k8s-version-691094 kubelet[1520]: I1119 02:33:19.263480    1520 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352-cni-cfg\") pod \"kindnet-b9cwh\" (UID: \"3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352\") " pod="kube-system/kindnet-b9cwh"
	Nov 19 02:33:19 old-k8s-version-691094 kubelet[1520]: I1119 02:33:19.263516    1520 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d23dd2d3-6511-45fb-ae70-d1da7b9b6b28-xtables-lock\") pod \"kube-proxy-79df5\" (UID: \"d23dd2d3-6511-45fb-ae70-d1da7b9b6b28\") " pod="kube-system/kube-proxy-79df5"
	Nov 19 02:33:19 old-k8s-version-691094 kubelet[1520]: I1119 02:33:19.263683    1520 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d23dd2d3-6511-45fb-ae70-d1da7b9b6b28-lib-modules\") pod \"kube-proxy-79df5\" (UID: \"d23dd2d3-6511-45fb-ae70-d1da7b9b6b28\") " pod="kube-system/kube-proxy-79df5"
	Nov 19 02:33:20 old-k8s-version-691094 kubelet[1520]: I1119 02:33:20.171191    1520 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-79df5" podStartSLOduration=1.171130716 podCreationTimestamp="2025-11-19 02:33:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:33:20.171028011 +0000 UTC m=+13.209359984" watchObservedRunningTime="2025-11-19 02:33:20.171130716 +0000 UTC m=+13.209462689"
	Nov 19 02:33:33 old-k8s-version-691094 kubelet[1520]: I1119 02:33:33.749213    1520 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 19 02:33:33 old-k8s-version-691094 kubelet[1520]: I1119 02:33:33.782895    1520 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-b9cwh" podStartSLOduration=11.682944737 podCreationTimestamp="2025-11-19 02:33:19 +0000 UTC" firstStartedPulling="2025-11-19 02:33:19.906795407 +0000 UTC m=+12.945127373" lastFinishedPulling="2025-11-19 02:33:23.006683769 +0000 UTC m=+16.045015737" observedRunningTime="2025-11-19 02:33:24.325010453 +0000 UTC m=+17.363342437" watchObservedRunningTime="2025-11-19 02:33:33.782833101 +0000 UTC m=+26.821165074"
	Nov 19 02:33:33 old-k8s-version-691094 kubelet[1520]: I1119 02:33:33.784096    1520 topology_manager.go:215] "Topology Admit Handler" podUID="56c0e21e-9d86-46c6-bc02-2a75554c0f07" podNamespace="kube-system" podName="coredns-5dd5756b68-bbvqz"
	Nov 19 02:33:33 old-k8s-version-691094 kubelet[1520]: I1119 02:33:33.784305    1520 topology_manager.go:215] "Topology Admit Handler" podUID="135636ea-f34f-4bfc-b2f6-cbbf3e91ca30" podNamespace="kube-system" podName="storage-provisioner"
	Nov 19 02:33:33 old-k8s-version-691094 kubelet[1520]: I1119 02:33:33.865360    1520 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/56c0e21e-9d86-46c6-bc02-2a75554c0f07-config-volume\") pod \"coredns-5dd5756b68-bbvqz\" (UID: \"56c0e21e-9d86-46c6-bc02-2a75554c0f07\") " pod="kube-system/coredns-5dd5756b68-bbvqz"
	Nov 19 02:33:33 old-k8s-version-691094 kubelet[1520]: I1119 02:33:33.865506    1520 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf7fj\" (UniqueName: \"kubernetes.io/projected/135636ea-f34f-4bfc-b2f6-cbbf3e91ca30-kube-api-access-sf7fj\") pod \"storage-provisioner\" (UID: \"135636ea-f34f-4bfc-b2f6-cbbf3e91ca30\") " pod="kube-system/storage-provisioner"
	Nov 19 02:33:33 old-k8s-version-691094 kubelet[1520]: I1119 02:33:33.865599    1520 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnwp6\" (UniqueName: \"kubernetes.io/projected/56c0e21e-9d86-46c6-bc02-2a75554c0f07-kube-api-access-rnwp6\") pod \"coredns-5dd5756b68-bbvqz\" (UID: \"56c0e21e-9d86-46c6-bc02-2a75554c0f07\") " pod="kube-system/coredns-5dd5756b68-bbvqz"
	Nov 19 02:33:33 old-k8s-version-691094 kubelet[1520]: I1119 02:33:33.865640    1520 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/135636ea-f34f-4bfc-b2f6-cbbf3e91ca30-tmp\") pod \"storage-provisioner\" (UID: \"135636ea-f34f-4bfc-b2f6-cbbf3e91ca30\") " pod="kube-system/storage-provisioner"
	Nov 19 02:33:35 old-k8s-version-691094 kubelet[1520]: I1119 02:33:35.207665    1520 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.207611574 podCreationTimestamp="2025-11-19 02:33:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:33:35.207180919 +0000 UTC m=+28.245512895" watchObservedRunningTime="2025-11-19 02:33:35.207611574 +0000 UTC m=+28.245943607"
	Nov 19 02:33:37 old-k8s-version-691094 kubelet[1520]: I1119 02:33:37.634226    1520 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-bbvqz" podStartSLOduration=18.634166172 podCreationTimestamp="2025-11-19 02:33:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:33:35.220124215 +0000 UTC m=+28.258456188" watchObservedRunningTime="2025-11-19 02:33:37.634166172 +0000 UTC m=+30.672498146"
	Nov 19 02:33:37 old-k8s-version-691094 kubelet[1520]: I1119 02:33:37.634483    1520 topology_manager.go:215] "Topology Admit Handler" podUID="90639f81-cb90-45ed-a6f9-0112e27e5bcb" podNamespace="default" podName="busybox"
	Nov 19 02:33:37 old-k8s-version-691094 kubelet[1520]: I1119 02:33:37.690929    1520 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7znp\" (UniqueName: \"kubernetes.io/projected/90639f81-cb90-45ed-a6f9-0112e27e5bcb-kube-api-access-f7znp\") pod \"busybox\" (UID: \"90639f81-cb90-45ed-a6f9-0112e27e5bcb\") " pod="default/busybox"
	Nov 19 02:33:41 old-k8s-version-691094 kubelet[1520]: I1119 02:33:41.220011    1520 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.919660586 podCreationTimestamp="2025-11-19 02:33:37 +0000 UTC" firstStartedPulling="2025-11-19 02:33:38.080932747 +0000 UTC m=+31.119264714" lastFinishedPulling="2025-11-19 02:33:40.381236106 +0000 UTC m=+33.419568070" observedRunningTime="2025-11-19 02:33:41.219704244 +0000 UTC m=+34.258036238" watchObservedRunningTime="2025-11-19 02:33:41.219963942 +0000 UTC m=+34.258295913"
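
Every pod this kubelet admits above is bound to node old-k8s-version-691094, so the same set can be recovered from the apiserver with the spec.nodeName field selector. A sketch, again assuming a clientset built as in the earlier examples:

	package example

	import (
		"context"

		v1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// podsOnNode lists pods across all namespaces scheduled to one node,
	// e.g. podsOnNode(ctx, cs, "old-k8s-version-691094").
	func podsOnNode(ctx context.Context, cs kubernetes.Interface, node string) (*v1.PodList, error) {
		return cs.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{
			FieldSelector: "spec.nodeName=" + node,
		})
	}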
	
	
	==> storage-provisioner [e773989cb5b9719f34c18e9670a458f821a72e3b0c1f48c1667978ae16fa12a4] <==
	I1119 02:33:34.305768       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 02:33:34.314850       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 02:33:34.314906       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1119 02:33:34.323210       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 02:33:34.323287       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"136e4121-044b-4dee-aaad-3e5583b7c2c1", APIVersion:"v1", ResourceVersion:"431", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-691094_76aba13c-9d9b-4e98-851a-12b3d58d7b2a became leader
	I1119 02:33:34.323354       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-691094_76aba13c-9d9b-4e98-851a-12b3d58d7b2a!
	I1119 02:33:34.423715       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-691094_76aba13c-9d9b-4e98-851a-12b3d58d7b2a!
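
The lease lines above show the provisioner only starts its controller after winning a leader election on kube-system/k8s.io-minikube-hostpath (via an Endpoints lock, per the event). A sketch of the same pattern using client-go's modern Lease-based lock; the timeouts are common defaults, not necessarily the provisioner's own settings.

	package example

	import (
		"context"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	// runElected blocks on a leader election and only invokes start() once
	// this instance holds the lease, mirroring the log lines above.
	func runElected(ctx context.Context, cs kubernetes.Interface, id string, start func(context.Context)) {
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
			Client:     cs.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}
		leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: start,
				OnStoppedLeading: func() { /* lost the lease; stop provisioning */ },
			},
		})
	}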
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-691094 -n old-k8s-version-691094
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-691094 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
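
The snapshot above records the proxy environment the test ran under. A sketch of how such a snapshot can be produced; the helper name is invented, and the "<empty>" placeholder simply mirrors the output format shown.

	package example

	import (
		"fmt"
		"os"
	)

	// snapshotProxyEnv formats the three proxy variables the way the line
	// above prints them, substituting "<empty>" for unset values.
	func snapshotProxyEnv() string {
		s := "PROXY env:"
		for _, k := range []string{"HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY"} {
			v := os.Getenv(k)
			if v == "" {
				v = "<empty>"
			}
			s += fmt.Sprintf(" %s=%q", k, v)
		}
		return s
	}
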
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-691094
helpers_test.go:243: (dbg) docker inspect old-k8s-version-691094:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "839df93ceb63d0d13317decf25c2c4eaccc915d4750cfa4a087069705153e5fd",
	        "Created": "2025-11-19T02:32:51.932562407Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 304783,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T02:32:51.978861725Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/839df93ceb63d0d13317decf25c2c4eaccc915d4750cfa4a087069705153e5fd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/839df93ceb63d0d13317decf25c2c4eaccc915d4750cfa4a087069705153e5fd/hostname",
	        "HostsPath": "/var/lib/docker/containers/839df93ceb63d0d13317decf25c2c4eaccc915d4750cfa4a087069705153e5fd/hosts",
	        "LogPath": "/var/lib/docker/containers/839df93ceb63d0d13317decf25c2c4eaccc915d4750cfa4a087069705153e5fd/839df93ceb63d0d13317decf25c2c4eaccc915d4750cfa4a087069705153e5fd-json.log",
	        "Name": "/old-k8s-version-691094",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-691094:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-691094",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "839df93ceb63d0d13317decf25c2c4eaccc915d4750cfa4a087069705153e5fd",
	                "LowerDir": "/var/lib/docker/overlay2/d9d29317fe080187b4ac955f19c3b70929277bc9d433b324633c36af9102372e-init/diff:/var/lib/docker/overlay2/de7938e6a920c133c8c6b988444cfbf6706fdc6982445229ca70e2488a725edb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d9d29317fe080187b4ac955f19c3b70929277bc9d433b324633c36af9102372e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d9d29317fe080187b4ac955f19c3b70929277bc9d433b324633c36af9102372e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d9d29317fe080187b4ac955f19c3b70929277bc9d433b324633c36af9102372e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-691094",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-691094/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-691094",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-691094",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-691094",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "25d822acf28637fa4cce4fc25c4664674f3bbb16e082b090f611dcae48313299",
	            "SandboxKey": "/var/run/docker/netns/25d822acf286",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-691094": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4c0ba205c5e4031c33eff77f74c332bc7353ce431fe839a2e2d9f73a15968b57",
	                    "EndpointID": "44f08eca5a7de78215b0c2d3109731c14bba8acc1511177c790890960c94d079",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "76:b6:e5:21:42:df",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-691094",
	                        "839df93ceb63"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
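
The NetworkSettings.Ports map in the inspect output above is how later steps locate the SSH tunnel into the node container (the harness itself queries it with a docker inspect Go template, visible further down in the Last Start log). A sketch of the same lookup using the Docker Engine Go SDK, assuming github.com/docker/docker/client is available:

	package example

	import (
		"context"
		"fmt"

		"github.com/docker/docker/client"
	)

	// sshHostPort returns the host port bound to the container's 22/tcp,
	// e.g. "33095" for old-k8s-version-691094 in the output above.
	func sshHostPort(ctx context.Context, name string) (string, error) {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			return "", err
		}
		defer cli.Close()
		info, err := cli.ContainerInspect(ctx, name)
		if err != nil {
			return "", err
		}
		bindings := info.NetworkSettings.Ports["22/tcp"]
		if len(bindings) == 0 {
			return "", fmt.Errorf("no host binding for 22/tcp on %s", name)
		}
		return bindings[0].HostPort, nil
	}
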
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-691094 -n old-k8s-version-691094
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-691094 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-691094 logs -n 25: (1.062976489s)
E1119 02:33:51.238662   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/addons-168589/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                             ARGS                                                                             │      PROFILE       │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-212776 sudo systemctl status kubelet --all --full --no-pager                                                                                       │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo systemctl cat kubelet --no-pager                                                                                                       │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                        │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cat /etc/kubernetes/kubelet.conf                                                                                                       │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cat /var/lib/kubelet/config.yaml                                                                                                       │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo systemctl status docker --all --full --no-pager                                                                                        │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo systemctl cat docker --no-pager                                                                                                        │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cat /etc/docker/daemon.json                                                                                                            │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo docker system info                                                                                                                     │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo systemctl status cri-docker --all --full --no-pager                                                                                    │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo systemctl cat cri-docker --no-pager                                                                                                    │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                               │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                         │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cri-dockerd --version                                                                                                                  │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo systemctl status containerd --all --full --no-pager                                                                                    │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo systemctl cat containerd --no-pager                                                                                                    │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cat /lib/systemd/system/containerd.service                                                                                             │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cat /etc/containerd/config.toml                                                                                                        │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo containerd config dump                                                                                                                 │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo systemctl status crio --all --full --no-pager                                                                                          │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo systemctl cat crio --no-pager                                                                                                          │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo crio config                                                                                                                            │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ delete  │ -p bridge-212776                                                                                                                                             │ bridge-212776      │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ start   │ -p embed-certs-168452 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ embed-certs-168452 │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
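
Every Audit row above is a subprocess the harness spawned against the minikube binary under test. A sketch of that shell-out pattern with os/exec; the binary path matches the report, the helper itself is illustrative rather than the harness's actual cli_runner.

	package example

	import (
		"context"
		"os/exec"
	)

	// runMinikube invokes the binary under test and captures combined output,
	// the way each Audit row above was produced. Example arguments, taken
	// from the first row: runMinikube(ctx, "ssh", "-p", "bridge-212776",
	// "sudo", "systemctl", "status", "kubelet", "--all", "--full", "--no-pager").
	func runMinikube(ctx context.Context, args ...string) (string, error) {
		cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", args...)
		out, err := cmd.CombinedOutput()
		return string(out), err
	}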
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 02:33:19
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 02:33:19.818158  315363 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:33:19.818478  315363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:33:19.818490  315363 out.go:374] Setting ErrFile to fd 2...
	I1119 02:33:19.818495  315363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:33:19.818721  315363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11107/.minikube/bin
	I1119 02:33:19.819330  315363 out.go:368] Setting JSON to false
	I1119 02:33:19.820616  315363 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4540,"bootTime":1763515060,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 02:33:19.820746  315363 start.go:143] virtualization: kvm guest
	I1119 02:33:19.822862  315363 out.go:179] * [embed-certs-168452] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 02:33:19.824498  315363 notify.go:221] Checking for updates...
	I1119 02:33:19.825083  315363 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:33:19.827189  315363 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:33:19.828628  315363 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-11107/kubeconfig
	I1119 02:33:19.830282  315363 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11107/.minikube
	I1119 02:33:19.832156  315363 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 02:33:19.833558  315363 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:33:19.835289  315363 config.go:182] Loaded profile config "kubernetes-upgrade-896338": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:33:19.835456  315363 config.go:182] Loaded profile config "no-preload-483142": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:33:19.835531  315363 config.go:182] Loaded profile config "old-k8s-version-691094": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1119 02:33:19.835628  315363 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:33:19.869670  315363 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 02:33:19.869754  315363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:33:19.948056  315363 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:91 SystemTime:2025-11-19 02:33:19.935291829 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:33:19.948230  315363 docker.go:319] overlay module found
	I1119 02:33:19.949713  315363 out.go:179] * Using the docker driver based on user configuration
	I1119 02:33:19.290831  301934 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:33:19.290855  301934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:33:19.290915  301934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-691094
	I1119 02:33:19.311399  301934 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:33:19.311423  301934 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:33:19.311589  301934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-691094
	I1119 02:33:19.329209  301934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/old-k8s-version-691094/id_rsa Username:docker}
	I1119 02:33:19.348646  301934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/old-k8s-version-691094/id_rsa Username:docker}
	I1119 02:33:19.386878  301934 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 02:33:19.430928  301934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:33:19.450594  301934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:33:19.476197  301934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:33:19.710133  301934 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1119 02:33:19.711417  301934 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-691094" to be "Ready" ...
	I1119 02:33:19.994360  301934 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 02:33:19.950788  315363 start.go:309] selected driver: docker
	I1119 02:33:19.950820  315363 start.go:930] validating driver "docker" against <nil>
	I1119 02:33:19.950835  315363 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:33:19.951688  315363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:33:20.027806  315363 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:91 SystemTime:2025-11-19 02:33:20.015781927 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:33:20.028020  315363 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 02:33:20.028315  315363 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:33:20.030421  315363 out.go:179] * Using Docker driver with root privileges
	I1119 02:33:20.031895  315363 cni.go:84] Creating CNI manager for ""
	I1119 02:33:20.031986  315363 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 02:33:20.031997  315363 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 02:33:20.032066  315363 start.go:353] cluster config:
	{Name:embed-certs-168452 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-168452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:33:20.034765  315363 out.go:179] * Starting "embed-certs-168452" primary control-plane node in "embed-certs-168452" cluster
	I1119 02:33:20.037487  315363 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1119 02:33:20.039029  315363 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 02:33:20.040485  315363 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 02:33:20.040520  315363 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1119 02:33:20.040528  315363 cache.go:65] Caching tarball of preloaded images
	I1119 02:33:20.040583  315363 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 02:33:20.040607  315363 preload.go:238] Found /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1119 02:33:20.040616  315363 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1119 02:33:20.040718  315363 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/config.json ...
	I1119 02:33:20.040739  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/config.json: {Name:mk2c1cb92572f9f7372f9d895b2c58b49c99bb3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:20.063579  315363 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 02:33:20.063610  315363 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 02:33:20.063636  315363 cache.go:243] Successfully downloaded all kic artifacts
	I1119 02:33:20.063670  315363 start.go:360] acquireMachinesLock for embed-certs-168452: {Name:mk4860299f8ff219c79992500844e49d455bd43a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:33:20.063790  315363 start.go:364] duration metric: took 102.461µs to acquireMachinesLock for "embed-certs-168452"
	I1119 02:33:20.063835  315363 start.go:93] Provisioning new machine with config: &{Name:embed-certs-168452 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-168452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 02:33:20.063944  315363 start.go:125] createHost starting for "" (driver="docker")
	I1119 02:33:19.995882  301934 addons.go:515] duration metric: took 741.418352ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 02:33:20.065989  315363 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 02:33:20.066193  315363 start.go:159] libmachine.API.Create for "embed-certs-168452" (driver="docker")
	I1119 02:33:20.066226  315363 client.go:173] LocalClient.Create starting
	I1119 02:33:20.066306  315363 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem
	I1119 02:33:20.066338  315363 main.go:143] libmachine: Decoding PEM data...
	I1119 02:33:20.066360  315363 main.go:143] libmachine: Parsing certificate...
	I1119 02:33:20.066438  315363 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem
	I1119 02:33:20.066464  315363 main.go:143] libmachine: Decoding PEM data...
	I1119 02:33:20.066475  315363 main.go:143] libmachine: Parsing certificate...
	I1119 02:33:20.066835  315363 cli_runner.go:164] Run: docker network inspect embed-certs-168452 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 02:33:20.087889  315363 cli_runner.go:211] docker network inspect embed-certs-168452 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 02:33:20.087987  315363 network_create.go:284] running [docker network inspect embed-certs-168452] to gather additional debugging logs...
	I1119 02:33:20.088020  315363 cli_runner.go:164] Run: docker network inspect embed-certs-168452
	W1119 02:33:20.108512  315363 cli_runner.go:211] docker network inspect embed-certs-168452 returned with exit code 1
	I1119 02:33:20.108553  315363 network_create.go:287] error running [docker network inspect embed-certs-168452]: docker network inspect embed-certs-168452: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-168452 not found
	I1119 02:33:20.108577  315363 network_create.go:289] output of [docker network inspect embed-certs-168452]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-168452 not found
	
	** /stderr **
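
	The two failed "docker network inspect" calls above are not an error in the test: this is how minikube decides that the per-profile network does not exist yet and has to be created. A minimal sketch of that probe in Go, assuming only what the log shows (a missing network exits non-zero with "not found" on stderr); the function name is illustrative, not minikube's:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // networkExists probes for a Docker network the way the log above does:
	    // run "docker network inspect" and treat a "not found" failure as absent.
	    func networkExists(name string) (bool, error) {
	        out, err := exec.Command("docker", "network", "inspect", name).CombinedOutput()
	        if err == nil {
	            return true, nil
	        }
	        if strings.Contains(string(out), "not found") {
	            return false, nil
	        }
	        return false, fmt.Errorf("docker network inspect %s: %v: %s", name, err, out)
	    }

	    func main() {
	        exists, err := networkExists("embed-certs-168452")
	        fmt.Println(exists, err)
	    }
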
	I1119 02:33:20.108677  315363 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:33:20.129985  315363 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ed39016f2aa9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:16:a0:62:5a:51} reservation:<nil>}
	I1119 02:33:20.130640  315363 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-42b0c19d513b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:b2:bf:ca:ce:21:95} reservation:<nil>}
	I1119 02:33:20.131454  315363 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-002e39e6dc05 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0a:8e:34:24:50:a5} reservation:<nil>}
	I1119 02:33:20.132210  315363 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c1155ea75a94 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:76:37:ad:5a:d8:36} reservation:<nil>}
	I1119 02:33:20.133253  315363 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-3ec6f45a7001 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:12:9a:69:49:8b:1f} reservation:<nil>}
	I1119 02:33:20.134343  315363 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ddf580}
	I1119 02:33:20.134393  315363 network_create.go:124] attempt to create docker network embed-certs-168452 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1119 02:33:20.134459  315363 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-168452 embed-certs-168452
	I1119 02:33:20.192566  315363 network_create.go:108] docker network embed-certs-168452 192.168.94.0/24 created
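
	The lines above show the subnet picker at work: starting at 192.168.49.0/24 and stepping the third octet by 9, minikube skips every /24 whose gateway address is already bound to a local bridge interface and takes the first free one (192.168.94.0/24 here). A sketch of that scan, assuming the stepping pattern visible in the log; the "taken" check is simplified to "is the gateway IP already assigned to a local interface":

	    package main

	    import (
	        "fmt"
	        "net"
	    )

	    // isTaken reports whether the candidate gateway IP is already assigned
	    // to some local interface (a stand-in for minikube's bridge check).
	    func isTaken(gateway string) bool {
	        addrs, err := net.InterfaceAddrs()
	        if err != nil {
	            return true // be conservative if interfaces cannot be inspected
	        }
	        for _, a := range addrs {
	            if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.String() == gateway {
	                return true
	            }
	        }
	        return false
	    }

	    // freeSubnet walks 192.168.49.0/24, 192.168.58.0/24, ... (third octet +9,
	    // the pattern in the log) and returns the first unclaimed candidate.
	    func freeSubnet() (string, error) {
	        for octet := 49; octet <= 247; octet += 9 {
	            if !isTaken(fmt.Sprintf("192.168.%d.1", octet)) {
	                return fmt.Sprintf("192.168.%d.0/24", octet), nil
	            }
	        }
	        return "", fmt.Errorf("no free 192.168.x.0/24 subnet found")
	    }

	    func main() {
	        s, err := freeSubnet()
	        fmt.Println(s, err)
	    }
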
	I1119 02:33:20.192597  315363 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-168452" container
	I1119 02:33:20.192665  315363 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 02:33:20.216991  315363 cli_runner.go:164] Run: docker volume create embed-certs-168452 --label name.minikube.sigs.k8s.io=embed-certs-168452 --label created_by.minikube.sigs.k8s.io=true
	I1119 02:33:20.240868  315363 oci.go:103] Successfully created a docker volume embed-certs-168452
	I1119 02:33:20.240948  315363 cli_runner.go:164] Run: docker run --rm --name embed-certs-168452-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-168452 --entrypoint /usr/bin/test -v embed-certs-168452:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1119 02:33:20.653772  315363 oci.go:107] Successfully prepared a docker volume embed-certs-168452
	I1119 02:33:20.653851  315363 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 02:33:20.653866  315363 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 02:33:20.653963  315363 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-168452:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
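
	The docker run above is a volume-seeding trick: a throwaway container mounts the preload tarball read-only next to the freshly created named volume and untars straight into it, so the node container later starts with /var already populated. A sketch of issuing the same invocation from Go, with illustrative arguments:

	    package main

	    import (
	        "log"
	        "os/exec"
	    )

	    // extractPreload seeds a named Docker volume from an lz4 tarball by
	    // running tar inside a throwaway container, mirroring the command above.
	    func extractPreload(tarball, volume, baseImage string) error {
	        cmd := exec.Command("docker", "run", "--rm",
	            "--entrypoint", "/usr/bin/tar",
	            "-v", tarball+":/preloaded.tar:ro",
	            "-v", volume+":/extractDir",
	            baseImage,
	            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	        return cmd.Run()
	    }

	    func main() {
	        if err := extractPreload(
	            "/path/to/preloaded-images.tar.lz4", // illustrative path
	            "embed-certs-168452",
	            "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924",
	        ); err != nil {
	            log.Fatal(err)
	        }
	    }
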
	I1119 02:33:20.215687  301934 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-691094" context rescaled to 1 replicas
	W1119 02:33:21.715210  301934 node_ready.go:57] node "old-k8s-version-691094" has "Ready":"False" status (will retry)
	W1119 02:33:24.323644  301934 node_ready.go:57] node "old-k8s-version-691094" has "Ready":"False" status (will retry)
	I1119 02:33:28.147893  307222 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 02:33:28.147982  307222 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 02:33:28.148104  307222 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 02:33:28.148201  307222 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 02:33:28.148256  307222 kubeadm.go:319] OS: Linux
	I1119 02:33:28.148332  307222 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 02:33:28.148450  307222 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 02:33:28.148522  307222 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 02:33:28.148596  307222 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 02:33:28.148672  307222 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 02:33:28.148756  307222 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 02:33:28.148841  307222 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 02:33:28.148915  307222 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 02:33:28.149019  307222 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 02:33:28.149159  307222 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 02:33:28.149311  307222 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 02:33:28.149421  307222 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 02:33:28.151537  307222 out.go:252]   - Generating certificates and keys ...
	I1119 02:33:28.151647  307222 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 02:33:28.151774  307222 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 02:33:28.151834  307222 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 02:33:28.151902  307222 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 02:33:28.152000  307222 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 02:33:28.152068  307222 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 02:33:28.152179  307222 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 02:33:28.152343  307222 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-483142] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 02:33:28.152451  307222 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 02:33:28.152598  307222 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-483142] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 02:33:28.152690  307222 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 02:33:28.152796  307222 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 02:33:28.152837  307222 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 02:33:28.152894  307222 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 02:33:28.152945  307222 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 02:33:28.153002  307222 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 02:33:28.153051  307222 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 02:33:28.153118  307222 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 02:33:28.153171  307222 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 02:33:28.153255  307222 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 02:33:28.153358  307222 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 02:33:28.154609  307222 out.go:252]   - Booting up control plane ...
	I1119 02:33:28.154709  307222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 02:33:28.154821  307222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 02:33:28.154904  307222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 02:33:28.155033  307222 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 02:33:28.155173  307222 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 02:33:28.155323  307222 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 02:33:28.155456  307222 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 02:33:28.155501  307222 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 02:33:28.155631  307222 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 02:33:28.155728  307222 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 02:33:28.155805  307222 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001464049s
	I1119 02:33:28.155906  307222 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 02:33:28.156017  307222 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1119 02:33:28.156135  307222 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 02:33:28.156242  307222 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 02:33:28.156335  307222 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.319882231s
	I1119 02:33:28.156456  307222 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.432703999s
	I1119 02:33:28.156560  307222 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.001475545s
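
	The [kubelet-check] and [control-plane-check] lines above are kubeadm polling health endpoints until they answer or the 4m0s budget runs out. A minimal sketch of that style of readiness probe, not kubeadm's actual code, shown against the kubelet's plain-HTTP healthz endpoint from the log (the 10257/10259 and apiserver livez checks are HTTPS and would additionally need TLS configuration):

	    package main

	    import (
	        "fmt"
	        "net/http"
	        "time"
	    )

	    // waitHealthy retries a GET until the endpoint returns 200 OK or the
	    // deadline passes, pausing briefly between attempts.
	    func waitHealthy(url string, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            resp, err := http.Get(url)
	            if err == nil {
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    return nil
	                }
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("%s not healthy after %s", url, timeout)
	    }

	    func main() {
	        fmt.Println(waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute))
	    }
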
	I1119 02:33:28.156685  307222 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 02:33:28.156832  307222 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 02:33:28.156917  307222 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 02:33:28.157202  307222 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-483142 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 02:33:28.157272  307222 kubeadm.go:319] [bootstrap-token] Using token: nwrx92.0c942uuundzydmcz
	I1119 02:33:28.159046  307222 out.go:252]   - Configuring RBAC rules ...
	I1119 02:33:28.159207  307222 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 02:33:28.159328  307222 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 02:33:28.159549  307222 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 02:33:28.159720  307222 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 02:33:28.159922  307222 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 02:33:28.160077  307222 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 02:33:28.160254  307222 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 02:33:28.160329  307222 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 02:33:28.160427  307222 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 02:33:28.160443  307222 kubeadm.go:319] 
	I1119 02:33:28.160527  307222 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 02:33:28.160536  307222 kubeadm.go:319] 
	I1119 02:33:28.160603  307222 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 02:33:28.160610  307222 kubeadm.go:319] 
	I1119 02:33:28.160642  307222 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 02:33:28.160730  307222 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 02:33:28.160832  307222 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 02:33:28.160845  307222 kubeadm.go:319] 
	I1119 02:33:28.160922  307222 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 02:33:28.160942  307222 kubeadm.go:319] 
	I1119 02:33:28.161016  307222 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 02:33:28.161031  307222 kubeadm.go:319] 
	I1119 02:33:28.161114  307222 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 02:33:28.161229  307222 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 02:33:28.161347  307222 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 02:33:28.161359  307222 kubeadm.go:319] 
	I1119 02:33:28.161531  307222 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 02:33:28.161656  307222 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 02:33:28.161665  307222 kubeadm.go:319] 
	I1119 02:33:28.161797  307222 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token nwrx92.0c942uuundzydmcz \
	I1119 02:33:28.161968  307222 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7672c4bccf136c60db27ba1101c0494bf335a1a16bcd609d07ab95cd78c58a5a \
	I1119 02:33:28.162022  307222 kubeadm.go:319] 	--control-plane 
	I1119 02:33:28.162036  307222 kubeadm.go:319] 
	I1119 02:33:28.162163  307222 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 02:33:28.162174  307222 kubeadm.go:319] 
	I1119 02:33:28.162301  307222 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token nwrx92.0c942uuundzydmcz \
	I1119 02:33:28.162456  307222 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7672c4bccf136c60db27ba1101c0494bf335a1a16bcd609d07ab95cd78c58a5a 
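
	The sha256:... value in the join commands above is not a hash of the whole CA certificate but of its Subject Public Key Info, the pinning format kubeadm documents for --discovery-token-ca-cert-hash. A sketch that recomputes it, assuming the CA sits at /var/lib/minikube/certs/ca.crt as the certificateDir lines above indicate:

	    package main

	    import (
	        "crypto/sha256"
	        "crypto/x509"
	        "encoding/pem"
	        "fmt"
	        "os"
	    )

	    func main() {
	        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	        if err != nil {
	            panic(err)
	        }
	        block, _ := pem.Decode(pemBytes)
	        if block == nil {
	            panic("no PEM block in ca.crt")
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            panic(err)
	        }
	        // kubeadm hashes the raw SubjectPublicKeyInfo, not the full cert.
	        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	        fmt.Printf("sha256:%x\n", sum)
	    }
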
	I1119 02:33:28.162469  307222 cni.go:84] Creating CNI manager for ""
	I1119 02:33:28.162475  307222 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 02:33:28.164382  307222 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 02:33:25.786283  315363 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-168452:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (5.132274902s)
	I1119 02:33:25.786322  315363 kic.go:203] duration metric: took 5.132452147s to extract preloaded images to volume ...
	W1119 02:33:25.786460  315363 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1119 02:33:25.786504  315363 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1119 02:33:25.786554  315363 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 02:33:25.853413  315363 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-168452 --name embed-certs-168452 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-168452 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-168452 --network embed-certs-168452 --ip 192.168.94.2 --volume embed-certs-168452:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1119 02:33:26.238651  315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Running}}
	I1119 02:33:26.261169  315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Status}}
	I1119 02:33:26.284313  315363 cli_runner.go:164] Run: docker exec embed-certs-168452 stat /var/lib/dpkg/alternatives/iptables
	I1119 02:33:26.336955  315363 oci.go:144] the created container "embed-certs-168452" has a running status.
	I1119 02:33:26.336985  315363 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa...
	I1119 02:33:26.484310  315363 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 02:33:26.517116  315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Status}}
	I1119 02:33:26.542901  315363 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 02:33:26.542925  315363 kic_runner.go:114] Args: [docker exec --privileged embed-certs-168452 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 02:33:26.595205  315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Status}}
	I1119 02:33:26.623359  315363 machine.go:94] provisionDockerMachine start ...
	I1119 02:33:26.623527  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:26.646254  315363 main.go:143] libmachine: Using SSH client type: native
	I1119 02:33:26.646550  315363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33105 <nil> <nil>}
	I1119 02:33:26.646569  315363 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 02:33:26.799221  315363 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-168452
	
	I1119 02:33:26.799250  315363 ubuntu.go:182] provisioning hostname "embed-certs-168452"
	I1119 02:33:26.799334  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:26.820929  315363 main.go:143] libmachine: Using SSH client type: native
	I1119 02:33:26.821188  315363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33105 <nil> <nil>}
	I1119 02:33:26.821210  315363 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-168452 && echo "embed-certs-168452" | sudo tee /etc/hostname
	I1119 02:33:26.966035  315363 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-168452
	
	I1119 02:33:26.966125  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:26.985276  315363 main.go:143] libmachine: Using SSH client type: native
	I1119 02:33:26.985598  315363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33105 <nil> <nil>}
	I1119 02:33:26.985633  315363 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-168452' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-168452/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-168452' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 02:33:27.121670  315363 main.go:143] libmachine: SSH cmd err, output: <nil>: 
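
	"SSH client type: native" in the provisioning exchange above means an in-process Go SSH connection to the container's forwarded loopback port rather than shelling out to ssh. A sketch of the equivalent exchange using golang.org/x/crypto/ssh (the library choice is an assumption about what "native" maps to here; the docker user, port 33105, and key path are taken from the log):

	    package main

	    import (
	        "fmt"
	        "os"

	        "golang.org/x/crypto/ssh"
	    )

	    func main() {
	        keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa")
	        if err != nil {
	            panic(err)
	        }
	        signer, err := ssh.ParsePrivateKey(keyBytes)
	        if err != nil {
	            panic(err)
	        }
	        cfg := &ssh.ClientConfig{
	            User:            "docker",
	            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
	            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // loopback-only target
	        }
	        client, err := ssh.Dial("tcp", "127.0.0.1:33105", cfg)
	        if err != nil {
	            panic(err)
	        }
	        defer client.Close()
	        sess, err := client.NewSession()
	        if err != nil {
	            panic(err)
	        }
	        defer sess.Close()
	        out, err := sess.Output("hostname") // first command in the log above
	        fmt.Println(string(out), err)
	    }
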
	I1119 02:33:27.121703  315363 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-11107/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-11107/.minikube}
	I1119 02:33:27.121727  315363 ubuntu.go:190] setting up certificates
	I1119 02:33:27.123000  315363 provision.go:84] configureAuth start
	I1119 02:33:27.123195  315363 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-168452
	I1119 02:33:27.143490  315363 provision.go:143] copyHostCerts
	I1119 02:33:27.143570  315363 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11107/.minikube/ca.pem, removing ...
	I1119 02:33:27.143580  315363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11107/.minikube/ca.pem
	I1119 02:33:27.143645  315363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-11107/.minikube/ca.pem (1082 bytes)
	I1119 02:33:27.143736  315363 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11107/.minikube/cert.pem, removing ...
	I1119 02:33:27.143744  315363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11107/.minikube/cert.pem
	I1119 02:33:27.143773  315363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-11107/.minikube/cert.pem (1123 bytes)
	I1119 02:33:27.143829  315363 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11107/.minikube/key.pem, removing ...
	I1119 02:33:27.143835  315363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11107/.minikube/key.pem
	I1119 02:33:27.143858  315363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-11107/.minikube/key.pem (1675 bytes)
	I1119 02:33:27.143923  315363 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-11107/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca-key.pem org=jenkins.embed-certs-168452 san=[127.0.0.1 192.168.94.2 embed-certs-168452 localhost minikube]
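
	configureAuth above mints a server certificate signed by minikube's own CA, carrying exactly the SAN list printed in the log. A sketch of what that generation looks like with Go's crypto/x509, assuming the CA pair is already loaded; the package and function names are illustrative:

	    package provision

	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "math/big"
	        "net"
	        "time"
	    )

	    // serverCert sketches the cert generated above: an RSA server cert
	    // signed by the local CA, with the hostnames and IPs from the san=[...]
	    // list in the log. Loading caCert/caKey is assumed to happen elsewhere.
	    func serverCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	        key, err := rsa.GenerateKey(rand.Reader, 2048)
	        if err != nil {
	            return nil, nil, err
	        }
	        tmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(1),
	            Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-168452"}},
	            DNSNames:     []string{"embed-certs-168452", "localhost", "minikube"},
	            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
	            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	        }
	        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	        return der, key, err
	    }
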
	I1119 02:33:27.239080  315363 provision.go:177] copyRemoteCerts
	I1119 02:33:27.239165  315363 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 02:33:27.239198  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:27.262397  315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
	I1119 02:33:27.362967  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 02:33:27.387666  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1119 02:33:27.418735  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 02:33:27.446098  315363 provision.go:87] duration metric: took 323.082791ms to configureAuth
	I1119 02:33:27.446129  315363 ubuntu.go:206] setting minikube options for container-runtime
	I1119 02:33:27.446313  315363 config.go:182] Loaded profile config "embed-certs-168452": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:33:27.446327  315363 machine.go:97] duration metric: took 822.891862ms to provisionDockerMachine
	I1119 02:33:27.446333  315363 client.go:176] duration metric: took 7.38010166s to LocalClient.Create
	I1119 02:33:27.446351  315363 start.go:167] duration metric: took 7.380160884s to libmachine.API.Create "embed-certs-168452"
	I1119 02:33:27.446358  315363 start.go:293] postStartSetup for "embed-certs-168452" (driver="docker")
	I1119 02:33:27.446409  315363 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 02:33:27.446465  315363 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 02:33:27.446501  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:27.470807  315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
	I1119 02:33:27.575097  315363 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 02:33:27.580067  315363 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 02:33:27.580102  315363 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 02:33:27.580115  315363 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11107/.minikube/addons for local assets ...
	I1119 02:33:27.580188  315363 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11107/.minikube/files for local assets ...
	I1119 02:33:27.580303  315363 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem -> 146572.pem in /etc/ssl/certs
	I1119 02:33:27.580434  315363 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 02:33:27.588848  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem --> /etc/ssl/certs/146572.pem (1708 bytes)
	I1119 02:33:27.611498  315363 start.go:296] duration metric: took 165.12815ms for postStartSetup
	I1119 02:33:27.611895  315363 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-168452
	I1119 02:33:27.630987  315363 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/config.json ...
	I1119 02:33:27.631276  315363 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:33:27.631327  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:27.650599  315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
	I1119 02:33:27.747119  315363 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 02:33:27.752242  315363 start.go:128] duration metric: took 7.68828048s to createHost
	I1119 02:33:27.752270  315363 start.go:83] releasing machines lock for "embed-certs-168452", held for 7.688466151s
	I1119 02:33:27.752448  315363 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-168452
	I1119 02:33:27.772595  315363 ssh_runner.go:195] Run: cat /version.json
	I1119 02:33:27.772634  315363 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 02:33:27.772668  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:27.772695  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:27.795020  315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
	I1119 02:33:27.795311  315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
	I1119 02:33:27.889466  315363 ssh_runner.go:195] Run: systemctl --version
	I1119 02:33:27.948057  315363 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 02:33:27.953270  315363 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 02:33:27.953328  315363 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 02:33:27.979962  315363 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 02:33:27.979983  315363 start.go:496] detecting cgroup driver to use...
	I1119 02:33:27.980013  315363 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 02:33:27.980050  315363 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1119 02:33:27.995148  315363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1119 02:33:28.009176  315363 docker.go:218] disabling cri-docker service (if available) ...
	I1119 02:33:28.009239  315363 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 02:33:28.028120  315363 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 02:33:28.047654  315363 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 02:33:28.137742  315363 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 02:33:28.233503  315363 docker.go:234] disabling docker service ...
	I1119 02:33:28.233569  315363 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 02:33:28.254546  315363 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 02:33:28.270970  315363 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 02:33:28.372358  315363 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 02:33:28.475816  315363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 02:33:28.494447  315363 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 02:33:28.514112  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1119 02:33:28.528713  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1119 02:33:28.542307  315363 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1119 02:33:28.542395  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1119 02:33:28.553682  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 02:33:28.564425  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1119 02:33:28.574563  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 02:33:28.585047  315363 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 02:33:28.594876  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1119 02:33:28.606066  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1119 02:33:28.616549  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1119 02:33:28.627283  315363 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 02:33:28.635846  315363 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 02:33:28.643854  315363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:33:28.727138  315363 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1119 02:33:28.825075  315363 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1119 02:33:28.825141  315363 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
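
	After the systemctl restart above, minikube does not talk CRI immediately: it first waits up to 60s for the containerd socket to reappear, which is what the stat call checks. A minimal sketch of that wait:

	    package main

	    import (
	        "fmt"
	        "os"
	        "time"
	    )

	    // waitForSocket polls stat() on the socket path until it exists or the
	    // timeout elapses, mirroring the 60s wait announced in the log.
	    func waitForSocket(path string, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            if _, err := os.Stat(path); err == nil {
	                return nil
	            }
	            time.Sleep(250 * time.Millisecond)
	        }
	        return fmt.Errorf("%s did not appear within %s", path, timeout)
	    }

	    func main() {
	        fmt.Println(waitForSocket("/run/containerd/containerd.sock", 60*time.Second))
	    }
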
	I1119 02:33:28.829886  315363 start.go:564] Will wait 60s for crictl version
	I1119 02:33:28.829954  315363 ssh_runner.go:195] Run: which crictl
	I1119 02:33:28.834062  315363 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 02:33:28.859386  315363 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1119 02:33:28.859454  315363 ssh_runner.go:195] Run: containerd --version
	I1119 02:33:28.881932  315363 ssh_runner.go:195] Run: containerd --version
	I1119 02:33:28.905418  315363 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1119 02:33:28.906851  315363 cli_runner.go:164] Run: docker network inspect embed-certs-168452 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:33:28.925576  315363 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1119 02:33:28.930043  315363 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
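
	The /bin/bash one-liner above is minikube's idempotent /etc/hosts update: first grep for the exact entry, and only when it is missing, filter out any stale line for that hostname and append a fresh one through a temp file plus sudo cp (the temp-file hop is what lets the privileged write go through sudo). The same upsert expressed in Go for readability; run it as a user that can write the file:

	    package main

	    import (
	        "fmt"
	        "os"
	        "strings"
	    )

	    // upsertHost mirrors the shell pipeline above: drop any existing line
	    // ending in "<tab>host", then append a fresh "ip<tab>host" entry.
	    func upsertHost(path, ip, host string) error {
	        data, err := os.ReadFile(path)
	        if err != nil {
	            return err
	        }
	        var kept []string
	        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
	            if !strings.HasSuffix(line, "\t"+host) {
	                kept = append(kept, line)
	            }
	        }
	        kept = append(kept, ip+"\t"+host)
	        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	    }

	    func main() {
	        fmt.Println(upsertHost("/etc/hosts", "192.168.94.1", "host.minikube.internal"))
	    }
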
	I1119 02:33:28.941472  315363 kubeadm.go:884] updating cluster {Name:embed-certs-168452 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-168452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 02:33:28.941570  315363 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 02:33:28.941633  315363 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:33:28.969084  315363 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 02:33:28.969102  315363 containerd.go:534] Images already preloaded, skipping extraction
	I1119 02:33:28.969159  315363 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:33:28.994529  315363 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 02:33:28.994549  315363 cache_images.go:86] Images are preloaded, skipping loading
	I1119 02:33:28.994556  315363 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 containerd true true} ...
	I1119 02:33:28.994637  315363 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-168452 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-168452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
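
	One detail worth noting in the generated drop-in above: the bare ExecStart= line before the full ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet ... is systemd's documented way to clear the ExecStart inherited from the base kubelet.service unit; without that reset, systemd would reject a second ExecStart for a non-oneshot service.
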
	I1119 02:33:28.994694  315363 ssh_runner.go:195] Run: sudo crictl info
	I1119 02:33:29.023174  315363 cni.go:84] Creating CNI manager for ""
	I1119 02:33:29.023197  315363 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 02:33:29.023211  315363 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 02:33:29.023232  315363 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-168452 NodeName:embed-certs-168452 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:33:29.023337  315363 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-168452"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 02:33:29.023423  315363 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 02:33:29.032358  315363 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 02:33:29.032438  315363 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:33:29.041206  315363 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1119 02:33:29.056159  315363 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:33:29.074583  315363 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I1119 02:33:29.089316  315363 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:33:29.093854  315363 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:33:29.106602  315363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:33:29.193818  315363 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:33:29.220027  315363 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452 for IP: 192.168.94.2
	I1119 02:33:29.220053  315363 certs.go:195] generating shared ca certs ...
	I1119 02:33:29.220075  315363 certs.go:227] acquiring lock for ca certs: {Name:mk11d6789b2333e17b3937495b501fbcca15c242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:29.220231  315363 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-11107/.minikube/ca.key
	I1119 02:33:29.220278  315363 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-11107/.minikube/proxy-client-ca.key
	I1119 02:33:29.220287  315363 certs.go:257] generating profile certs ...
	I1119 02:33:29.220334  315363 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/client.key
	I1119 02:33:29.220351  315363 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/client.crt with IP's: []
	I1119 02:33:29.496773  315363 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/client.crt ...
	I1119 02:33:29.496800  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/client.crt: {Name:mkdb5e24f9c8b0d3d9849ba91ac24e28be0abdf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:29.496993  315363 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/client.key ...
	I1119 02:33:29.497006  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/client.key: {Name:mk5aa88fe9180cc5f94c07d5a968428b4ccf37cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:29.497088  315363 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.key.0e221cc2
	I1119 02:33:29.497102  315363 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.crt.0e221cc2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	W1119 02:33:26.721525  301934 node_ready.go:57] node "old-k8s-version-691094" has "Ready":"False" status (will retry)
	W1119 02:33:29.215940  301934 node_ready.go:57] node "old-k8s-version-691094" has "Ready":"False" status (will retry)
	I1119 02:33:28.165835  307222 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 02:33:28.176028  307222 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 02:33:28.176052  307222 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 02:33:28.195615  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 02:33:28.450816  307222 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 02:33:28.450899  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:28.450933  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-483142 minikube.k8s.io/updated_at=2025_11_19T02_33_28_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=no-preload-483142 minikube.k8s.io/primary=true
	I1119 02:33:28.538275  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:28.538445  307222 ops.go:34] apiserver oom_adj: -16
	I1119 02:33:29.038968  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:29.539224  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:30.038530  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:30.539271  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:31.038434  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:31.538496  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:32.038945  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:32.539001  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:33.038571  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:33.129034  307222 kubeadm.go:1114] duration metric: took 4.678195875s to wait for elevateKubeSystemPrivileges
	I1119 02:33:33.129095  307222 kubeadm.go:403] duration metric: took 17.40558167s to StartCluster
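
The half-second cadence of the `kubectl get sa default` runs above is a plain polling loop: after kubeadm init, the default ServiceAccount in kube-system appears asynchronously, so minikube retries until it exists before declaring elevateKubeSystemPrivileges done. A sketch of that wait, with kubectl and kubeconfig paths as printed in the log (waitForDefaultSA is illustrative):

    // Sketch of the ~500ms polling loop above: retry
    // `kubectl get sa default` until it succeeds or the context expires.
    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func waitForDefaultSA(ctx context.Context, kubectl, kubeconfig string) error {
        tick := time.NewTicker(500 * time.Millisecond)
        defer tick.Stop()
        for {
            cmd := exec.CommandContext(ctx, "sudo", kubectl,
                "get", "sa", "default", "--kubeconfig="+kubeconfig)
            if err := cmd.Run(); err == nil {
                return nil // the default service account now exists
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("default SA never appeared: %w", ctx.Err())
            case <-tick.C:
            }
        }
    }
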
	I1119 02:33:33.129119  307222 settings.go:142] acquiring lock: {Name:mka23b77a5b0acf39a3c823ca2c708cd59c6f5ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:33.129202  307222 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11107/kubeconfig
	I1119 02:33:33.131182  307222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/kubeconfig: {Name:mka1017ae43351e828a5b28d36aa5ae57c3e40e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:33.131481  307222 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 02:33:33.131519  307222 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 02:33:33.131585  307222 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:33:33.131706  307222 addons.go:70] Setting storage-provisioner=true in profile "no-preload-483142"
	I1119 02:33:33.131748  307222 config.go:182] Loaded profile config "no-preload-483142": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:33:33.131794  307222 addons.go:70] Setting default-storageclass=true in profile "no-preload-483142"
	I1119 02:33:33.131827  307222 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-483142"
	I1119 02:33:33.131810  307222 addons.go:239] Setting addon storage-provisioner=true in "no-preload-483142"
	I1119 02:33:33.131959  307222 host.go:66] Checking if "no-preload-483142" exists ...
	I1119 02:33:33.132200  307222 cli_runner.go:164] Run: docker container inspect no-preload-483142 --format={{.State.Status}}
	I1119 02:33:33.132480  307222 cli_runner.go:164] Run: docker container inspect no-preload-483142 --format={{.State.Status}}
	I1119 02:33:33.134152  307222 out.go:179] * Verifying Kubernetes components...
	I1119 02:33:33.135585  307222 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:33:33.159834  307222 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:33:33.160479  307222 addons.go:239] Setting addon default-storageclass=true in "no-preload-483142"
	I1119 02:33:33.160545  307222 host.go:66] Checking if "no-preload-483142" exists ...
	I1119 02:33:33.161017  307222 cli_runner.go:164] Run: docker container inspect no-preload-483142 --format={{.State.Status}}
	I1119 02:33:33.161390  307222 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:33:33.161410  307222 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:33:33.161458  307222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-483142
	I1119 02:33:33.198354  307222 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:33:33.198390  307222 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:33:33.198448  307222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-483142
	I1119 02:33:33.198522  307222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/no-preload-483142/id_rsa Username:docker}
	I1119 02:33:33.223657  307222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/no-preload-483142/id_rsa Username:docker}
	I1119 02:33:33.248952  307222 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 02:33:33.322673  307222 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:33:33.348662  307222 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:33:33.354901  307222 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:33:33.503051  307222 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1119 02:33:33.504327  307222 node_ready.go:35] waiting up to 6m0s for node "no-preload-483142" to be "Ready" ...
	I1119 02:33:33.756829  307222 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
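
The long sed pipeline at 02:33:33.248952 splices a hosts{} stanza into CoreDNS's Corefile just ahead of the forward plugin, which is what the "host record injected" line then confirms. A sketch of the same splice done as string manipulation instead of sed (injectHostsBlock is illustrative, not minikube's function):

    // Sketch of the Corefile rewrite driven by the sed pipeline above:
    // insert a hosts{} stanza (host.minikube.internal -> gateway IP)
    // immediately before the "forward . /etc/resolv.conf" line.
    package main

    import (
        "fmt"
        "strings"
    )

    func injectHostsBlock(corefile, gatewayIP string) string {
        block := fmt.Sprintf(
            "        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
            gatewayIP)
        var out strings.Builder
        for _, line := range strings.SplitAfter(corefile, "\n") {
            if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
                out.WriteString(block) // splice before the forward plugin
            }
            out.WriteString(line)
        }
        return out.String()
    }
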
	I1119 02:33:29.844643  315363 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.crt.0e221cc2 ...
	I1119 02:33:29.844667  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.crt.0e221cc2: {Name:mk1596cf7137a998e277abf94c4c839907009a9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:29.844872  315363 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.key.0e221cc2 ...
	I1119 02:33:29.844901  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.key.0e221cc2: {Name:mk9d817ab63555ebb02e0590916ce23352cf008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:29.845022  315363 certs.go:382] copying /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.crt.0e221cc2 -> /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.crt
	I1119 02:33:29.845144  315363 certs.go:386] copying /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.key.0e221cc2 -> /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.key
	I1119 02:33:29.845239  315363 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.key
	I1119 02:33:29.845260  315363 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.crt with IP's: []
	I1119 02:33:30.013529  315363 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.crt ...
	I1119 02:33:30.013564  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.crt: {Name:mka683634a30ab1845434f0fc49f75059694b447 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:30.013775  315363 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.key ...
	I1119 02:33:30.013796  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.key: {Name:mk9e8dbde74fbcae5bb0e966570ae4f43c6f07e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
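
The profile certs generated above follow the usual kubeadm layout: a client cert for minikube-user, an apiserver serving cert whose IP SANs cover the in-cluster service VIP (10.96.0.1, the first address of the 10.96.0.0/12 service CIDR), loopback, and the node IP, plus an aggregator proxy-client cert. A compact crypto/x509 sketch of the serving-cert step; key size and subject fields here are illustrative, not minikube's exact choices:

    // Minimal sketch: sign a serving cert carrying the IP SANs listed
    // in the log, using an existing CA cert and key.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    func signServingCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs from the log: service VIP, loopback, and the node IP.
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.94.2"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        return der, key, err
    }
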
	I1119 02:33:30.014054  315363 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/14657.pem (1338 bytes)
	W1119 02:33:30.014108  315363 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-11107/.minikube/certs/14657_empty.pem, impossibly tiny 0 bytes
	I1119 02:33:30.014124  315363 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 02:33:30.014183  315363 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem (1082 bytes)
	I1119 02:33:30.014219  315363 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:33:30.014257  315363 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/key.pem (1675 bytes)
	I1119 02:33:30.014318  315363 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem (1708 bytes)
	I1119 02:33:30.014986  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:33:30.034798  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 02:33:30.054155  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:33:30.074272  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:33:30.094396  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1119 02:33:30.114605  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 02:33:30.133991  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:33:30.153105  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 02:33:30.172052  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem --> /usr/share/ca-certificates/146572.pem (1708 bytes)
	I1119 02:33:30.194139  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:33:30.212546  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/certs/14657.pem --> /usr/share/ca-certificates/14657.pem (1338 bytes)
	I1119 02:33:30.231534  315363 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 02:33:30.246493  315363 ssh_runner.go:195] Run: openssl version
	I1119 02:33:30.252586  315363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146572.pem && ln -fs /usr/share/ca-certificates/146572.pem /etc/ssl/certs/146572.pem"
	I1119 02:33:30.261620  315363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146572.pem
	I1119 02:33:30.265824  315363 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:02 /usr/share/ca-certificates/146572.pem
	I1119 02:33:30.265886  315363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146572.pem
	I1119 02:33:30.301164  315363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/146572.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 02:33:30.310429  315363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:33:30.319818  315363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:33:30.323998  315363 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:57 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:33:30.324046  315363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:33:30.360567  315363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 02:33:30.370492  315363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14657.pem && ln -fs /usr/share/ca-certificates/14657.pem /etc/ssl/certs/14657.pem"
	I1119 02:33:30.380695  315363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14657.pem
	I1119 02:33:30.385171  315363 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:02 /usr/share/ca-certificates/14657.pem
	I1119 02:33:30.385241  315363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14657.pem
	I1119 02:33:30.422375  315363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14657.pem /etc/ssl/certs/51391683.0"
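
Each `openssl x509 -hash` plus `ln -fs` pair above installs a certificate into OpenSSL's hashed trust directory: lookups under /etc/ssl/certs resolve <subject-hash>.0, so the symlink must be named with the hash the tool prints (3ec20f2e, b5213941, 51391683 here). A sketch of that wiring (linkBySubjectHash is illustrative):

    // Sketch of the CA-trust wiring above: hash the certificate
    // subject with openssl, then symlink <hash>.0 at the PEM.
    package main

    import (
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func linkBySubjectHash(pem string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // mirror `ln -fs`: replace any stale link
        return os.Symlink(pem, link)
    }
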
	I1119 02:33:30.432329  315363 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:33:30.436333  315363 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 02:33:30.436432  315363 kubeadm.go:401] StartCluster: {Name:embed-certs-168452 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-168452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
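
The StartCluster line above is a one-line %+v-style dump of the cluster config. A trimmed sketch of its shape, with field names taken only from the dump itself (this is not minikube's full type):

    // Trimmed, illustrative shape of the config printed above.
    package config

    type KubernetesConfig struct {
        KubernetesVersion string // v1.34.1
        ClusterName       string // embed-certs-168452
        ContainerRuntime  string // containerd
        NetworkPlugin     string // cni
        ServiceCIDR       string // 10.96.0.0/12
    }

    type Node struct {
        IP           string // 192.168.94.2
        Port         int    // 8443
        ControlPlane bool
        Worker       bool
    }

    type ClusterConfig struct {
        Name             string
        EmbedCerts       bool
        Memory           int // MiB
        CPUs             int
        Driver           string // docker
        KubernetesConfig KubernetesConfig
        Nodes            []Node
    }
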
	I1119 02:33:30.436494  315363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1119 02:33:30.436588  315363 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:33:30.465191  315363 cri.go:89] found id: ""
	I1119 02:33:30.465255  315363 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:33:30.474328  315363 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 02:33:30.483132  315363 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 02:33:30.483196  315363 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 02:33:30.491249  315363 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 02:33:30.491272  315363 kubeadm.go:158] found existing configuration files:
	
	I1119 02:33:30.491320  315363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 02:33:30.499072  315363 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 02:33:30.499140  315363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 02:33:30.507018  315363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 02:33:30.514836  315363 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 02:33:30.514890  315363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 02:33:30.523396  315363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 02:33:30.532721  315363 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 02:33:30.532772  315363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 02:33:30.541409  315363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 02:33:30.550090  315363 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 02:33:30.550157  315363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
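
The grep/rm pairs above are a stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; anything else (including files that simply don't exist, hence the status-2 grep results) is removed so kubeadm init can regenerate it. A sketch of the per-file check with rm -f semantics (pruneStaleConf is illustrative):

    // Sketch of the stale-config sweep: remove any kubeconfig that
    // does not reference the expected control-plane endpoint.
    package main

    import (
        "os"
        "strings"
    )

    func pruneStaleConf(path, endpoint string) error {
        data, err := os.ReadFile(path)
        if os.IsNotExist(err) {
            return nil // already gone; matches rm -f
        }
        if err != nil {
            return err
        }
        if !strings.Contains(string(data), "https://"+endpoint) {
            return os.Remove(path)
        }
        return nil
    }
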
	I1119 02:33:30.558693  315363 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 02:33:30.636057  315363 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1119 02:33:30.702518  315363 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1119 02:33:31.715333  301934 node_ready.go:57] node "old-k8s-version-691094" has "Ready":"False" status (will retry)
	W1119 02:33:33.715963  301934 node_ready.go:57] node "old-k8s-version-691094" has "Ready":"False" status (will retry)
	I1119 02:33:34.216972  301934 node_ready.go:49] node "old-k8s-version-691094" is "Ready"
	I1119 02:33:34.217010  301934 node_ready.go:38] duration metric: took 14.505569399s for node "old-k8s-version-691094" to be "Ready" ...
	I1119 02:33:34.217027  301934 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:33:34.217083  301934 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:33:34.235995  301934 api_server.go:72] duration metric: took 14.98160502s to wait for apiserver process to appear ...
	I1119 02:33:34.236024  301934 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:33:34.236046  301934 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1119 02:33:34.242612  301934 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1119 02:33:34.244469  301934 api_server.go:141] control plane version: v1.28.0
	I1119 02:33:34.244501  301934 api_server.go:131] duration metric: took 8.468136ms to wait for apiserver health ...
	I1119 02:33:34.244512  301934 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 02:33:34.249250  301934 system_pods.go:59] 8 kube-system pods found
	I1119 02:33:34.249293  301934 system_pods.go:61] "coredns-5dd5756b68-bbvqz" [56c0e21e-9d86-46c6-bc02-2a75554c0f07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:34.249301  301934 system_pods.go:61] "etcd-old-k8s-version-691094" [76abba5a-69d9-4c2c-aeb1-0225fad93a1d] Running
	I1119 02:33:34.249308  301934 system_pods.go:61] "kindnet-b9cwh" [3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352] Running
	I1119 02:33:34.249326  301934 system_pods.go:61] "kube-apiserver-old-k8s-version-691094" [f47acebe-490a-4f18-bb22-ab3375a572a9] Running
	I1119 02:33:34.249331  301934 system_pods.go:61] "kube-controller-manager-old-k8s-version-691094" [b1768141-5e2e-4b98-9747-6d96a1e4f121] Running
	I1119 02:33:34.249336  301934 system_pods.go:61] "kube-proxy-79df5" [d23dd2d3-6511-45fb-ae70-d1da7b9b6b28] Running
	I1119 02:33:34.249340  301934 system_pods.go:61] "kube-scheduler-old-k8s-version-691094" [ac8b4be1-254b-4014-93ea-35ba777dc762] Running
	I1119 02:33:34.249347  301934 system_pods.go:61] "storage-provisioner" [135636ea-f34f-4bfc-b2f6-cbbf3e91ca30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:34.249389  301934 system_pods.go:74] duration metric: took 4.842718ms to wait for pod list to return data ...
	I1119 02:33:34.249403  301934 default_sa.go:34] waiting for default service account to be created ...
	I1119 02:33:34.251979  301934 default_sa.go:45] found service account: "default"
	I1119 02:33:34.252000  301934 default_sa.go:55] duration metric: took 2.59102ms for default service account to be created ...
	I1119 02:33:34.252008  301934 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 02:33:34.256098  301934 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:34.256141  301934 system_pods.go:89] "coredns-5dd5756b68-bbvqz" [56c0e21e-9d86-46c6-bc02-2a75554c0f07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:34.256148  301934 system_pods.go:89] "etcd-old-k8s-version-691094" [76abba5a-69d9-4c2c-aeb1-0225fad93a1d] Running
	I1119 02:33:34.256155  301934 system_pods.go:89] "kindnet-b9cwh" [3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352] Running
	I1119 02:33:34.256158  301934 system_pods.go:89] "kube-apiserver-old-k8s-version-691094" [f47acebe-490a-4f18-bb22-ab3375a572a9] Running
	I1119 02:33:34.256163  301934 system_pods.go:89] "kube-controller-manager-old-k8s-version-691094" [b1768141-5e2e-4b98-9747-6d96a1e4f121] Running
	I1119 02:33:34.256166  301934 system_pods.go:89] "kube-proxy-79df5" [d23dd2d3-6511-45fb-ae70-d1da7b9b6b28] Running
	I1119 02:33:34.256169  301934 system_pods.go:89] "kube-scheduler-old-k8s-version-691094" [ac8b4be1-254b-4014-93ea-35ba777dc762] Running
	I1119 02:33:34.256173  301934 system_pods.go:89] "storage-provisioner" [135636ea-f34f-4bfc-b2f6-cbbf3e91ca30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:34.256204  301934 retry.go:31] will retry after 294.08163ms: missing components: kube-dns
	I1119 02:33:34.555117  301934 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:34.555149  301934 system_pods.go:89] "coredns-5dd5756b68-bbvqz" [56c0e21e-9d86-46c6-bc02-2a75554c0f07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:34.555155  301934 system_pods.go:89] "etcd-old-k8s-version-691094" [76abba5a-69d9-4c2c-aeb1-0225fad93a1d] Running
	I1119 02:33:34.555160  301934 system_pods.go:89] "kindnet-b9cwh" [3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352] Running
	I1119 02:33:34.555164  301934 system_pods.go:89] "kube-apiserver-old-k8s-version-691094" [f47acebe-490a-4f18-bb22-ab3375a572a9] Running
	I1119 02:33:34.555168  301934 system_pods.go:89] "kube-controller-manager-old-k8s-version-691094" [b1768141-5e2e-4b98-9747-6d96a1e4f121] Running
	I1119 02:33:34.555171  301934 system_pods.go:89] "kube-proxy-79df5" [d23dd2d3-6511-45fb-ae70-d1da7b9b6b28] Running
	I1119 02:33:34.555174  301934 system_pods.go:89] "kube-scheduler-old-k8s-version-691094" [ac8b4be1-254b-4014-93ea-35ba777dc762] Running
	I1119 02:33:34.555181  301934 system_pods.go:89] "storage-provisioner" [135636ea-f34f-4bfc-b2f6-cbbf3e91ca30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:34.555200  301934 retry.go:31] will retry after 239.208285ms: missing components: kube-dns
	I1119 02:33:34.801314  301934 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:34.801356  301934 system_pods.go:89] "coredns-5dd5756b68-bbvqz" [56c0e21e-9d86-46c6-bc02-2a75554c0f07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:34.801397  301934 system_pods.go:89] "etcd-old-k8s-version-691094" [76abba5a-69d9-4c2c-aeb1-0225fad93a1d] Running
	I1119 02:33:34.801408  301934 system_pods.go:89] "kindnet-b9cwh" [3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352] Running
	I1119 02:33:34.801414  301934 system_pods.go:89] "kube-apiserver-old-k8s-version-691094" [f47acebe-490a-4f18-bb22-ab3375a572a9] Running
	I1119 02:33:34.801421  301934 system_pods.go:89] "kube-controller-manager-old-k8s-version-691094" [b1768141-5e2e-4b98-9747-6d96a1e4f121] Running
	I1119 02:33:34.801426  301934 system_pods.go:89] "kube-proxy-79df5" [d23dd2d3-6511-45fb-ae70-d1da7b9b6b28] Running
	I1119 02:33:34.801432  301934 system_pods.go:89] "kube-scheduler-old-k8s-version-691094" [ac8b4be1-254b-4014-93ea-35ba777dc762] Running
	I1119 02:33:34.801446  301934 system_pods.go:89] "storage-provisioner" [135636ea-f34f-4bfc-b2f6-cbbf3e91ca30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:34.801465  301934 retry.go:31] will retry after 406.320974ms: missing components: kube-dns
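
The retry.go lines show the wait for kube-dns: the kube-system pod list is re-checked with short, slightly randomized backoffs (239ms-502ms here) until coredns leaves Pending. A generic sketch of that loop (retryUntil is illustrative, not minikube's retry package):

    // Sketch of the retry pattern above: re-run a check with a short
    // randomized wait until it passes or the deadline expires.
    package main

    import (
        "errors"
        "math/rand"
        "time"
    )

    func retryUntil(deadline time.Time, check func() error) error {
        for {
            err := check()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return err
            }
            // Randomized wait, like the 239ms-502ms gaps in the log.
            time.Sleep(time.Duration(200+rand.Intn(350)) * time.Millisecond)
        }
    }

    var errMissingDNS = errors.New("missing components: kube-dns")
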
	I1119 02:33:33.758898  307222 addons.go:515] duration metric: took 627.311179ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 02:33:34.007122  307222 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-483142" context rescaled to 1 replicas
	W1119 02:33:35.507777  307222 node_ready.go:57] node "no-preload-483142" has "Ready":"False" status (will retry)
	I1119 02:33:35.212153  301934 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:35.212193  301934 system_pods.go:89] "coredns-5dd5756b68-bbvqz" [56c0e21e-9d86-46c6-bc02-2a75554c0f07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:35.212202  301934 system_pods.go:89] "etcd-old-k8s-version-691094" [76abba5a-69d9-4c2c-aeb1-0225fad93a1d] Running
	I1119 02:33:35.212208  301934 system_pods.go:89] "kindnet-b9cwh" [3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352] Running
	I1119 02:33:35.212214  301934 system_pods.go:89] "kube-apiserver-old-k8s-version-691094" [f47acebe-490a-4f18-bb22-ab3375a572a9] Running
	I1119 02:33:35.212221  301934 system_pods.go:89] "kube-controller-manager-old-k8s-version-691094" [b1768141-5e2e-4b98-9747-6d96a1e4f121] Running
	I1119 02:33:35.212226  301934 system_pods.go:89] "kube-proxy-79df5" [d23dd2d3-6511-45fb-ae70-d1da7b9b6b28] Running
	I1119 02:33:35.212230  301934 system_pods.go:89] "kube-scheduler-old-k8s-version-691094" [ac8b4be1-254b-4014-93ea-35ba777dc762] Running
	I1119 02:33:35.212235  301934 system_pods.go:89] "storage-provisioner" [135636ea-f34f-4bfc-b2f6-cbbf3e91ca30] Running
	I1119 02:33:35.212252  301934 retry.go:31] will retry after 502.533324ms: missing components: kube-dns
	I1119 02:33:35.719172  301934 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:35.719211  301934 system_pods.go:89] "coredns-5dd5756b68-bbvqz" [56c0e21e-9d86-46c6-bc02-2a75554c0f07] Running
	I1119 02:33:35.719220  301934 system_pods.go:89] "etcd-old-k8s-version-691094" [76abba5a-69d9-4c2c-aeb1-0225fad93a1d] Running
	I1119 02:33:35.719225  301934 system_pods.go:89] "kindnet-b9cwh" [3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352] Running
	I1119 02:33:35.719231  301934 system_pods.go:89] "kube-apiserver-old-k8s-version-691094" [f47acebe-490a-4f18-bb22-ab3375a572a9] Running
	I1119 02:33:35.719238  301934 system_pods.go:89] "kube-controller-manager-old-k8s-version-691094" [b1768141-5e2e-4b98-9747-6d96a1e4f121] Running
	I1119 02:33:35.719243  301934 system_pods.go:89] "kube-proxy-79df5" [d23dd2d3-6511-45fb-ae70-d1da7b9b6b28] Running
	I1119 02:33:35.719248  301934 system_pods.go:89] "kube-scheduler-old-k8s-version-691094" [ac8b4be1-254b-4014-93ea-35ba777dc762] Running
	I1119 02:33:35.719254  301934 system_pods.go:89] "storage-provisioner" [135636ea-f34f-4bfc-b2f6-cbbf3e91ca30] Running
	I1119 02:33:35.719267  301934 system_pods.go:126] duration metric: took 1.46725409s to wait for k8s-apps to be running ...
	I1119 02:33:35.719280  301934 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 02:33:35.719333  301934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:33:35.733944  301934 system_svc.go:56] duration metric: took 14.654804ms WaitForService to wait for kubelet
	I1119 02:33:35.733974  301934 kubeadm.go:587] duration metric: took 16.479589704s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:33:35.733994  301934 node_conditions.go:102] verifying NodePressure condition ...
	I1119 02:33:35.736881  301934 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 02:33:35.736904  301934 node_conditions.go:123] node cpu capacity is 8
	I1119 02:33:35.736917  301934 node_conditions.go:105] duration metric: took 2.917087ms to run NodePressure ...
	I1119 02:33:35.736947  301934 start.go:242] waiting for startup goroutines ...
	I1119 02:33:35.736956  301934 start.go:247] waiting for cluster config update ...
	I1119 02:33:35.736966  301934 start.go:256] writing updated cluster config ...
	I1119 02:33:35.737252  301934 ssh_runner.go:195] Run: rm -f paused
	I1119 02:33:35.741706  301934 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:33:35.746693  301934 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-bbvqz" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:35.751796  301934 pod_ready.go:94] pod "coredns-5dd5756b68-bbvqz" is "Ready"
	I1119 02:33:35.751821  301934 pod_ready.go:86] duration metric: took 5.102077ms for pod "coredns-5dd5756b68-bbvqz" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:35.754811  301934 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:35.759826  301934 pod_ready.go:94] pod "etcd-old-k8s-version-691094" is "Ready"
	I1119 02:33:35.759852  301934 pod_ready.go:86] duration metric: took 5.017899ms for pod "etcd-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:35.763701  301934 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:35.768670  301934 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-691094" is "Ready"
	I1119 02:33:35.768693  301934 pod_ready.go:86] duration metric: took 4.969901ms for pod "kube-apiserver-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:35.772227  301934 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:36.146684  301934 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-691094" is "Ready"
	I1119 02:33:36.146718  301934 pod_ready.go:86] duration metric: took 374.468133ms for pod "kube-controller-manager-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:36.347472  301934 pod_ready.go:83] waiting for pod "kube-proxy-79df5" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:36.746791  301934 pod_ready.go:94] pod "kube-proxy-79df5" is "Ready"
	I1119 02:33:36.746855  301934 pod_ready.go:86] duration metric: took 399.347819ms for pod "kube-proxy-79df5" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:36.946961  301934 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:37.347059  301934 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-691094" is "Ready"
	I1119 02:33:37.347090  301934 pod_ready.go:86] duration metric: took 400.10454ms for pod "kube-scheduler-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:37.347108  301934 pod_ready.go:40] duration metric: took 1.605370699s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:33:37.406793  301934 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1119 02:33:37.408685  301934 out.go:203] 
	W1119 02:33:37.410052  301934 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1119 02:33:37.411691  301934 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1119 02:33:37.413481  301934 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-691094" cluster and "default" namespace by default
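
The "minor skew: 6" figure above is simply the distance between the kubectl minor version (34) and the cluster's (28); minikube warns because kubectl is only supported within one minor version of the API server. A sketch of the comparison:

    // Sketch of the skew check behind the warning above: compare the
    // minor versions of the local kubectl and the control plane.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    func minorSkew(client, server string) (int, error) {
        minor := func(v string) (int, error) {
            parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
            if len(parts) < 2 {
                return 0, fmt.Errorf("bad version %q", v)
            }
            return strconv.Atoi(parts[1])
        }
        c, err := minor(client)
        if err != nil {
            return 0, err
        }
        s, err := minor(server)
        if err != nil {
            return 0, err
        }
        if c > s {
            return c - s, nil
        }
        return s - c, nil
    }

    func main() {
        skew, _ := minorSkew("1.34.2", "1.28.0")
        fmt.Println(skew) // 6, as in the log
    }
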
	W1119 02:33:37.511440  307222 node_ready.go:57] node "no-preload-483142" has "Ready":"False" status (will retry)
	W1119 02:33:40.007282  307222 node_ready.go:57] node "no-preload-483142" has "Ready":"False" status (will retry)
	I1119 02:33:42.519187  315363 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 02:33:42.519270  315363 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 02:33:42.519471  315363 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 02:33:42.519558  315363 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 02:33:42.519641  315363 kubeadm.go:319] OS: Linux
	I1119 02:33:42.519723  315363 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 02:33:42.519793  315363 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 02:33:42.519863  315363 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 02:33:42.519937  315363 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 02:33:42.520011  315363 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 02:33:42.520082  315363 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 02:33:42.520161  315363 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 02:33:42.520246  315363 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 02:33:42.520396  315363 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 02:33:42.520528  315363 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 02:33:42.520640  315363 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 02:33:42.520739  315363 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 02:33:42.522619  315363 out.go:252]   - Generating certificates and keys ...
	I1119 02:33:42.522717  315363 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 02:33:42.522778  315363 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 02:33:42.522841  315363 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 02:33:42.522898  315363 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 02:33:42.522948  315363 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 02:33:42.522986  315363 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 02:33:42.523065  315363 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 02:33:42.523231  315363 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-168452 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1119 02:33:42.523301  315363 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 02:33:42.523451  315363 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-168452 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1119 02:33:42.523527  315363 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 02:33:42.523599  315363 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 02:33:42.523658  315363 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 02:33:42.523737  315363 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 02:33:42.523787  315363 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 02:33:42.523833  315363 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 02:33:42.523879  315363 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 02:33:42.523945  315363 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 02:33:42.524004  315363 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 02:33:42.524082  315363 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 02:33:42.524137  315363 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 02:33:42.525751  315363 out.go:252]   - Booting up control plane ...
	I1119 02:33:42.525831  315363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 02:33:42.525893  315363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 02:33:42.525997  315363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 02:33:42.526121  315363 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 02:33:42.526235  315363 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 02:33:42.526323  315363 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 02:33:42.526401  315363 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 02:33:42.526441  315363 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 02:33:42.526546  315363 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 02:33:42.526633  315363 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 02:33:42.526684  315363 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001668097s
	I1119 02:33:42.526759  315363 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 02:33:42.526828  315363 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1119 02:33:42.526912  315363 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 02:33:42.526979  315363 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 02:33:42.527060  315363 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.143588684s
	I1119 02:33:42.527116  315363 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.751163591s
	I1119 02:33:42.527185  315363 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002351229s
	I1119 02:33:42.527279  315363 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 02:33:42.527418  315363 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 02:33:42.527475  315363 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 02:33:42.527642  315363 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-168452 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 02:33:42.527698  315363 kubeadm.go:319] [bootstrap-token] Using token: f9q4qi.t8dfm2zfbs2z2sgs
	I1119 02:33:42.529100  315363 out.go:252]   - Configuring RBAC rules ...
	I1119 02:33:42.529232  315363 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 02:33:42.529348  315363 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 02:33:42.529576  315363 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 02:33:42.529779  315363 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 02:33:42.529949  315363 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 02:33:42.530070  315363 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 02:33:42.530217  315363 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 02:33:42.530321  315363 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 02:33:42.530403  315363 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 02:33:42.530413  315363 kubeadm.go:319] 
	I1119 02:33:42.530492  315363 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 02:33:42.530502  315363 kubeadm.go:319] 
	I1119 02:33:42.530604  315363 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 02:33:42.530618  315363 kubeadm.go:319] 
	I1119 02:33:42.530647  315363 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 02:33:42.530726  315363 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 02:33:42.530797  315363 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 02:33:42.530809  315363 kubeadm.go:319] 
	I1119 02:33:42.530880  315363 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 02:33:42.530885  315363 kubeadm.go:319] 
	I1119 02:33:42.530954  315363 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 02:33:42.530981  315363 kubeadm.go:319] 
	I1119 02:33:42.531052  315363 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 02:33:42.531164  315363 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 02:33:42.531261  315363 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 02:33:42.531271  315363 kubeadm.go:319] 
	I1119 02:33:42.531424  315363 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 02:33:42.531551  315363 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 02:33:42.531570  315363 kubeadm.go:319] 
	I1119 02:33:42.531690  315363 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token f9q4qi.t8dfm2zfbs2z2sgs \
	I1119 02:33:42.531850  315363 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7672c4bccf136c60db27ba1101c0494bf335a1a16bcd609d07ab95cd78c58a5a \
	I1119 02:33:42.531878  315363 kubeadm.go:319] 	--control-plane 
	I1119 02:33:42.531885  315363 kubeadm.go:319] 
	I1119 02:33:42.531966  315363 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 02:33:42.531972  315363 kubeadm.go:319] 
	I1119 02:33:42.532046  315363 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token f9q4qi.t8dfm2zfbs2z2sgs \
	I1119 02:33:42.532149  315363 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7672c4bccf136c60db27ba1101c0494bf335a1a16bcd609d07ab95cd78c58a5a 
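
The --discovery-token-ca-cert-hash value in the join commands above is the SHA-256 digest of the cluster CA's DER-encoded Subject Public Key Info, which joining nodes use to pin the CA before trusting anything it signed. A sketch computing the same value from ca.crt:

    // Sketch: derive the kubeadm discovery hash (sha256 over the
    // CA certificate's SPKI DER) from a PEM-encoded ca.crt.
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func caCertHash(path string) (string, error) {
        pemBytes, err := os.ReadFile(path)
        if err != nil {
            return "", err
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            return "", fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return "", err
        }
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            return "", err
        }
        return fmt.Sprintf("sha256:%x", sha256.Sum256(spki)), nil
    }
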
	I1119 02:33:42.532161  315363 cni.go:84] Creating CNI manager for ""
	I1119 02:33:42.532167  315363 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 02:33:42.535194  315363 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 02:33:42.536650  315363 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 02:33:42.541710  315363 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 02:33:42.541734  315363 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 02:33:42.556040  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 02:33:42.817018  315363 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 02:33:42.817147  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:42.817217  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-168452 minikube.k8s.io/updated_at=2025_11_19T02_33_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=embed-certs-168452 minikube.k8s.io/primary=true
	I1119 02:33:42.828812  315363 ops.go:34] apiserver oom_adj: -16
	I1119 02:33:42.896633  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:43.396810  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:43.896801  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:44.397677  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1119 02:33:46.450455  208368 system_pods.go:55] pod list returned error: the server was unable to return a response in the time allotted, but may still be processing the request (get pods)
	I1119 02:33:46.452233  208368 out.go:203] 
	W1119 02:33:46.453522  208368 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for system pods: apiserver never returned a pod list
	W1119 02:33:46.453544  208368 out.go:285] * 
	W1119 02:33:46.455831  208368 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 02:33:46.457044  208368 out.go:203] 
	W1119 02:33:42.007484  307222 node_ready.go:57] node "no-preload-483142" has "Ready":"False" status (will retry)
	W1119 02:33:44.007813  307222 node_ready.go:57] node "no-preload-483142" has "Ready":"False" status (will retry)
	W1119 02:33:46.008192  307222 node_ready.go:57] node "no-preload-483142" has "Ready":"False" status (will retry)
	I1119 02:33:44.897377  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:45.397137  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:45.897616  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:46.397448  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:46.896710  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:47.397632  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:47.897150  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:48.003028  315363 kubeadm.go:1114] duration metric: took 5.18596901s to wait for elevateKubeSystemPrivileges
	I1119 02:33:48.003056  315363 kubeadm.go:403] duration metric: took 17.566632128s to StartCluster
	I1119 02:33:48.003071  315363 settings.go:142] acquiring lock: {Name:mka23b77a5b0acf39a3c823ca2c708cd59c6f5ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:48.003125  315363 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11107/kubeconfig
	I1119 02:33:48.005668  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/kubeconfig: {Name:mka1017ae43351e828a5b28d36aa5ae57c3e40e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:48.005964  315363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 02:33:48.005984  315363 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 02:33:48.006098  315363 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:33:48.006191  315363 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-168452"
	I1119 02:33:48.006211  315363 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-168452"
	I1119 02:33:48.006209  315363 addons.go:70] Setting default-storageclass=true in profile "embed-certs-168452"
	I1119 02:33:48.006218  315363 config.go:182] Loaded profile config "embed-certs-168452": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:33:48.006231  315363 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-168452"
	I1119 02:33:48.006249  315363 host.go:66] Checking if "embed-certs-168452" exists ...
	I1119 02:33:48.006692  315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Status}}
	I1119 02:33:48.006819  315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Status}}
	I1119 02:33:48.007901  315363 out.go:179] * Verifying Kubernetes components...
	I1119 02:33:48.009142  315363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:33:48.032568  315363 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:33:48.032594  315363 addons.go:239] Setting addon default-storageclass=true in "embed-certs-168452"
	I1119 02:33:48.032649  315363 host.go:66] Checking if "embed-certs-168452" exists ...
	I1119 02:33:48.033140  315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Status}}
	I1119 02:33:48.034177  315363 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:33:48.034248  315363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:33:48.034332  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:48.063775  315363 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:33:48.063802  315363 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:33:48.063864  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:48.067763  315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
	I1119 02:33:48.088481  315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
	I1119 02:33:48.118977  315363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 02:33:48.181811  315363 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:33:48.192106  315363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:33:48.217510  315363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:33:48.350174  315363 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1119 02:33:48.351838  315363 node_ready.go:35] waiting up to 6m0s for node "embed-certs-168452" to be "Ready" ...
	I1119 02:33:48.575859  315363 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 02:33:48.577031  315363 addons.go:515] duration metric: took 570.934719ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 02:33:48.855157  315363 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-168452" context rescaled to 1 replicas
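
The interleaved streams above come from three test profiles running in parallel: PID 208368 exits with GUEST_START after "apiserver never returned a pod list", while PIDs 315363 (embed-certs-168452) and 307222 (no-preload-483142) are still bringing up their clusters. A minimal sketch of the check that timed out, runnable by hand against whichever profile failed (the context name is a placeholder here, not taken from the log):

    # Reproduce the pod-list wait that timed out (30s client-side cap):
    kubectl --context <profile> get pods -n kube-system --request-timeout=30s
    # If that hangs, ask the apiserver for a verbose readiness report:
    kubectl --context <profile> get --raw='/readyz?verbose'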
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	365ce1a4e43ef       56cc512116c8f       10 seconds ago      Running             busybox                   0                   84edcb21162b2       busybox                                          default
	1e139eec825de       ead0a4a53df89       16 seconds ago      Running             coredns                   0                   06ddb433194da       coredns-5dd5756b68-bbvqz                         kube-system
	e773989cb5b97       6e38f40d628db       16 seconds ago      Running             storage-provisioner       0                   600c711a387b1       storage-provisioner                              kube-system
	dda3cde60adce       409467f978b4a       27 seconds ago      Running             kindnet-cni               0                   041636111f700       kindnet-b9cwh                                    kube-system
	5dde09d6b5534       ea1030da44aa1       31 seconds ago      Running             kube-proxy                0                   03988ca85cd54       kube-proxy-79df5                                 kube-system
	ae40aa345e79c       4be79c38a4bab       49 seconds ago      Running             kube-controller-manager   0                   12de271987c00       kube-controller-manager-old-k8s-version-691094   kube-system
	b77b79fa6a466       f6f496300a2ae       49 seconds ago      Running             kube-scheduler            0                   6f3bdd55a5e5d       kube-scheduler-old-k8s-version-691094            kube-system
	dbc14fc0cc43a       73deb9a3f7025       49 seconds ago      Running             etcd                      0                   725875976c48d       etcd-old-k8s-version-691094                      kube-system
	2710c5af3eee6       bb5e0dde9054c       49 seconds ago      Running             kube-apiserver            0                   d1a4659f2bb05       kube-apiserver-old-k8s-version-691094            kube-system
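
The table above is gathered from inside the node via the CRI. Assuming the old-k8s-version-691094 profile from these logs is still running, roughly the same view can be reproduced with crictl:

    # List all CRI containers (running and exited) inside the node:
    minikube ssh -p old-k8s-version-691094 -- sudo crictl ps -a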
	
	
	==> containerd <==
	Nov 19 02:33:34 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:34.230807866Z" level=info msg="StartContainer for \"e773989cb5b9719f34c18e9670a458f821a72e3b0c1f48c1667978ae16fa12a4\""
	Nov 19 02:33:34 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:34.232491388Z" level=info msg="connecting to shim e773989cb5b9719f34c18e9670a458f821a72e3b0c1f48c1667978ae16fa12a4" address="unix:///run/containerd/s/24b671fd6ae6e1c46e5997e6e8fbc89d9c643c0b983828d1d7f18ff2d3ba023f" protocol=ttrpc version=3
	Nov 19 02:33:34 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:34.234021335Z" level=info msg="Container 1e139eec825de9114abd6701b9ab42ee2b8ab9b766ece6ead08550a8ad647722: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 02:33:34 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:34.243041735Z" level=info msg="CreateContainer within sandbox \"06ddb433194dae11f9f24856f079619dc43b22d6efbf43415d290df94aba9325\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1e139eec825de9114abd6701b9ab42ee2b8ab9b766ece6ead08550a8ad647722\""
	Nov 19 02:33:34 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:34.244135610Z" level=info msg="StartContainer for \"1e139eec825de9114abd6701b9ab42ee2b8ab9b766ece6ead08550a8ad647722\""
	Nov 19 02:33:34 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:34.245331761Z" level=info msg="connecting to shim 1e139eec825de9114abd6701b9ab42ee2b8ab9b766ece6ead08550a8ad647722" address="unix:///run/containerd/s/03135e70f496d3ee336cea3910b2f223365bccd66022f2def8a237460898a081" protocol=ttrpc version=3
	Nov 19 02:33:34 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:34.294883900Z" level=info msg="StartContainer for \"e773989cb5b9719f34c18e9670a458f821a72e3b0c1f48c1667978ae16fa12a4\" returns successfully"
	Nov 19 02:33:34 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:34.299330808Z" level=info msg="StartContainer for \"1e139eec825de9114abd6701b9ab42ee2b8ab9b766ece6ead08550a8ad647722\" returns successfully"
	Nov 19 02:33:37 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:37.944509536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:90639f81-cb90-45ed-a6f9-0112e27e5bcb,Namespace:default,Attempt:0,}"
	Nov 19 02:33:37 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:37.998189490Z" level=info msg="connecting to shim 84edcb21162b28e6c4334781fde734e7818dc08098d3e7b6f9bebcbdd7484a77" address="unix:///run/containerd/s/8aad71720f1e0fd951e14bd3c26cd9557b67ca3cc26df8334d136754eab93e47" namespace=k8s.io protocol=ttrpc version=3
	Nov 19 02:33:38 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:38.079356801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:90639f81-cb90-45ed-a6f9-0112e27e5bcb,Namespace:default,Attempt:0,} returns sandbox id \"84edcb21162b28e6c4334781fde734e7818dc08098d3e7b6f9bebcbdd7484a77\""
	Nov 19 02:33:38 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:38.081263124Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 02:33:40 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:40.375425944Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 02:33:40 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:40.376680652Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396644"
	Nov 19 02:33:40 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:40.377796397Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 02:33:40 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:40.380436656Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 02:33:40 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:40.380913727Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.299607364s"
	Nov 19 02:33:40 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:40.380948526Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 19 02:33:40 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:40.382715067Z" level=info msg="CreateContainer within sandbox \"84edcb21162b28e6c4334781fde734e7818dc08098d3e7b6f9bebcbdd7484a77\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 19 02:33:40 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:40.390767649Z" level=info msg="Container 365ce1a4e43ef3582dc9c7cdeac6a51a19501124788117bafd9ac6879a6e8f3e: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 02:33:40 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:40.397566913Z" level=info msg="CreateContainer within sandbox \"84edcb21162b28e6c4334781fde734e7818dc08098d3e7b6f9bebcbdd7484a77\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"365ce1a4e43ef3582dc9c7cdeac6a51a19501124788117bafd9ac6879a6e8f3e\""
	Nov 19 02:33:40 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:40.398187959Z" level=info msg="StartContainer for \"365ce1a4e43ef3582dc9c7cdeac6a51a19501124788117bafd9ac6879a6e8f3e\""
	Nov 19 02:33:40 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:40.399054796Z" level=info msg="connecting to shim 365ce1a4e43ef3582dc9c7cdeac6a51a19501124788117bafd9ac6879a6e8f3e" address="unix:///run/containerd/s/8aad71720f1e0fd951e14bd3c26cd9557b67ca3cc26df8334d136754eab93e47" protocol=ttrpc version=3
	Nov 19 02:33:40 old-k8s-version-691094 containerd[658]: time="2025-11-19T02:33:40.449054855Z" level=info msg="StartContainer for \"365ce1a4e43ef3582dc9c7cdeac6a51a19501124788117bafd9ac6879a6e8f3e\" returns successfully"
	Nov 19 02:33:47 old-k8s-version-691094 containerd[658]: E1119 02:33:47.716892     658 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
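
The containerd journal traces the full lifecycle of the busybox pod: RunPodSandbox, a ~2.3s image pull, CreateContainer, then StartContainer. The pull step can be repeated by hand with the image reference copied from the log (a sketch, assuming the node is still up):

    # Re-pull the exact image the test used, from inside the node:
    minikube ssh -p old-k8s-version-691094 -- \
      sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc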
	
	
	==> coredns [1e139eec825de9114abd6701b9ab42ee2b8ab9b766ece6ead08550a8ad647722] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41644 - 1915 "HINFO IN 1315140230493656931.2438502800312971411. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.015921461s
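
This CoreDNS instance is serving the Corefile that minikube rewrote earlier in the log (the sed pipeline that inserts a hosts block mapping host.minikube.internal to the gateway IP, plus a log directive before errors). The live result can be dumped directly; the fragment in the comment below is reconstructed from that sed expression, with the gateway IP varying per profile:

    # Dump the rewritten Corefile:
    kubectl --context old-k8s-version-691094 -n kube-system \
      get configmap coredns -o yaml
    # Expected injected fragment (IP is the profile's gateway, e.g. 192.168.103.1):
    #     hosts {
    #        192.168.103.1 host.minikube.internal
    #        fallthrough
    #     }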
	
	
	==> describe nodes <==
	Name:               old-k8s-version-691094
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-691094
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=old-k8s-version-691094
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T02_33_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 02:33:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-691094
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 02:33:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 02:33:37 +0000   Wed, 19 Nov 2025 02:33:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 02:33:37 +0000   Wed, 19 Nov 2025 02:33:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 02:33:37 +0000   Wed, 19 Nov 2025 02:33:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 02:33:37 +0000   Wed, 19 Nov 2025 02:33:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-691094
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                3f7ebf4a-3703-452a-b0e3-7f24129d6ff8
	  Boot ID:                    fea1659d-b751-4f87-a281-819adf52de2d
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-5dd5756b68-bbvqz                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     31s
	  kube-system                 etcd-old-k8s-version-691094                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         43s
	  kube-system                 kindnet-b9cwh                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-old-k8s-version-691094             250m (3%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-controller-manager-old-k8s-version-691094    200m (2%)     0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-proxy-79df5                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-old-k8s-version-691094             100m (1%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 31s                kube-proxy       
	  Normal  Starting                 50s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  49s (x8 over 50s)  kubelet          Node old-k8s-version-691094 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    49s (x8 over 50s)  kubelet          Node old-k8s-version-691094 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     49s (x7 over 50s)  kubelet          Node old-k8s-version-691094 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  49s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 43s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  43s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  43s                kubelet          Node old-k8s-version-691094 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s                kubelet          Node old-k8s-version-691094 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s                kubelet          Node old-k8s-version-691094 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           32s                node-controller  Node old-k8s-version-691094 event: Registered Node old-k8s-version-691094 in Controller
	  Normal  NodeReady                17s                kubelet          Node old-k8s-version-691094 status is now: NodeReady
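
This block is the `kubectl describe node` view; the Ready condition flipped to True at 02:33:33, about 17 seconds before the dump. To watch just that condition instead of the full dump, a jsonpath query works (a sketch):

    kubectl --context old-k8s-version-691094 get node old-k8s-version-691094 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'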
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 cb 73 dc 0d 8a 08 06
	[Nov19 02:31] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 74 0c d7 a6 53 08 06
	[  +0.000339] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 cb 73 dc 0d 8a 08 06
	[ +28.680399] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 e9 7c 92 36 13 08 06
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff fa 7c 46 16 38 40 08 06
	[Nov19 02:32] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1a f6 63 65 24 bc 08 06
	[  +4.552839] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 88 80 5a bc 94 08 06
	[ +11.086189] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 76 d1 26 7f 3d 08 06
	[  +0.000377] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa 7c 46 16 38 40 08 06
	[  +9.270754] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a2 49 fd 34 51 3b 08 06
	[  +0.000702] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 88 80 5a bc 94 08 06
	[ +23.593864] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca 86 43 5f 18 4c 08 06
	[  +0.000495] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a f6 63 65 24 bc 08 06
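
The repeated "martian source" entries are kernel logs for packets arriving on eth0 with pod-network (10.244.0.0/16) source addresses; they appear because martian logging is enabled and are common, usually harmless noise in this bridged CI setup. The relevant sysctl can be checked from inside the node:

    # 1 means martian packets are logged to dmesg:
    minikube ssh -p old-k8s-version-691094 -- \
      sysctl net.ipv4.conf.all.log_martians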
	
	
	==> etcd [dbc14fc0cc43a9945343d07a4033d270d1157c5a3b861d1386847247f42a1497] <==
	{"level":"info","ts":"2025-11-19T02:33:02.033025Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-11-19T02:33:02.040102Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T02:33:02.040299Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T02:33:02.04213Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T02:33:02.042472Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-19T02:33:04.985971Z","caller":"traceutil/trace.go:171","msg":"trace[658045218] transaction","detail":"{read_only:false; response_revision:186; number_of_response:1; }","duration":"139.194171ms","start":"2025-11-19T02:33:04.846735Z","end":"2025-11-19T02:33:04.98593Z","steps":["trace[658045218] 'process raft request'  (duration: 56.292983ms)","trace[658045218] 'compare'  (duration: 82.76773ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T02:33:17.810576Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.742417ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/disruption-controller\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-19T02:33:17.810663Z","caller":"traceutil/trace.go:171","msg":"trace[1420202652] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/disruption-controller; range_end:; response_count:0; response_revision:321; }","duration":"138.866997ms","start":"2025-11-19T02:33:17.671781Z","end":"2025-11-19T02:33:17.810648Z","steps":["trace[1420202652] 'range keys from in-memory index tree'  (duration: 138.650303ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T02:33:18.046425Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.242001ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-19T02:33:18.046593Z","caller":"traceutil/trace.go:171","msg":"trace[71002672] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:0; response_revision:322; }","duration":"124.423394ms","start":"2025-11-19T02:33:17.922148Z","end":"2025-11-19T02:33:18.046571Z","steps":["trace[71002672] 'range keys from in-memory index tree'  (duration: 124.156675ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T02:33:18.259188Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.027933ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-old-k8s-version-691094\" ","response":"range_response_count:1 size:4323"}
	{"level":"info","ts":"2025-11-19T02:33:18.259309Z","caller":"traceutil/trace.go:171","msg":"trace[1217585489] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-old-k8s-version-691094; range_end:; response_count:1; response_revision:323; }","duration":"111.181979ms","start":"2025-11-19T02:33:18.148101Z","end":"2025-11-19T02:33:18.259282Z","steps":["trace[1217585489] 'range keys from in-memory index tree'  (duration: 110.919931ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:33:18.388325Z","caller":"traceutil/trace.go:171","msg":"trace[907749827] transaction","detail":"{read_only:false; response_revision:325; number_of_response:1; }","duration":"121.659188ms","start":"2025-11-19T02:33:18.266633Z","end":"2025-11-19T02:33:18.388292Z","steps":["trace[907749827] 'process raft request'  (duration: 100.125362ms)","trace[907749827] 'compare'  (duration: 21.381062ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T02:33:18.455915Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.143906ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-11-19T02:33:18.455992Z","caller":"traceutil/trace.go:171","msg":"trace[960144194] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/deployment-controller; range_end:; response_count:1; response_revision:327; }","duration":"117.233633ms","start":"2025-11-19T02:33:18.338743Z","end":"2025-11-19T02:33:18.455976Z","steps":["trace[960144194] 'agreement among raft nodes before linearized reading'  (duration: 117.101216ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T02:33:18.45598Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.711295ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpoint-controller\" ","response":"range_response_count:1 size:203"}
	{"level":"info","ts":"2025-11-19T02:33:18.45604Z","caller":"traceutil/trace.go:171","msg":"trace[783663048] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpoint-controller; range_end:; response_count:1; response_revision:327; }","duration":"129.779137ms","start":"2025-11-19T02:33:18.326242Z","end":"2025-11-19T02:33:18.456021Z","steps":["trace[783663048] 'agreement among raft nodes before linearized reading'  (duration: 129.651787ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:33:24.321121Z","caller":"traceutil/trace.go:171","msg":"trace[1897328613] linearizableReadLoop","detail":"{readStateIndex:425; appliedIndex:424; }","duration":"106.777468ms","start":"2025-11-19T02:33:24.214325Z","end":"2025-11-19T02:33:24.321102Z","steps":["trace[1897328613] 'read index received'  (duration: 106.680216ms)","trace[1897328613] 'applied index is now lower than readState.Index'  (duration: 96.455µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T02:33:24.321178Z","caller":"traceutil/trace.go:171","msg":"trace[1221524606] transaction","detail":"{read_only:false; response_revision:411; number_of_response:1; }","duration":"145.359127ms","start":"2025-11-19T02:33:24.175789Z","end":"2025-11-19T02:33:24.321148Z","steps":["trace[1221524606] 'process raft request'  (duration: 145.189798ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T02:33:24.321267Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.932695ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/old-k8s-version-691094\" ","response":"range_response_count:1 size:4739"}
	{"level":"info","ts":"2025-11-19T02:33:24.321304Z","caller":"traceutil/trace.go:171","msg":"trace[1542339758] range","detail":"{range_begin:/registry/minions/old-k8s-version-691094; range_end:; response_count:1; response_revision:411; }","duration":"107.003966ms","start":"2025-11-19T02:33:24.21429Z","end":"2025-11-19T02:33:24.321294Z","steps":["trace[1542339758] 'agreement among raft nodes before linearized reading'  (duration: 106.897787ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:33:24.391988Z","caller":"traceutil/trace.go:171","msg":"trace[537055186] transaction","detail":"{read_only:false; response_revision:412; number_of_response:1; }","duration":"105.729066ms","start":"2025-11-19T02:33:24.286236Z","end":"2025-11-19T02:33:24.391965Z","steps":["trace[537055186] 'process raft request'  (duration: 105.588299ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:33:24.54902Z","caller":"traceutil/trace.go:171","msg":"trace[463438125] transaction","detail":"{read_only:false; response_revision:413; number_of_response:1; }","duration":"219.879541ms","start":"2025-11-19T02:33:24.329104Z","end":"2025-11-19T02:33:24.548984Z","steps":["trace[463438125] 'process raft request'  (duration: 199.511127ms)","trace[463438125] 'compare'  (duration: 20.266985ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T02:33:24.879054Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"165.153166ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/old-k8s-version-691094\" ","response":"range_response_count:1 size:4739"}
	{"level":"info","ts":"2025-11-19T02:33:24.879128Z","caller":"traceutil/trace.go:171","msg":"trace[886070201] range","detail":"{range_begin:/registry/minions/old-k8s-version-691094; range_end:; response_count:1; response_revision:413; }","duration":"165.246043ms","start":"2025-11-19T02:33:24.713866Z","end":"2025-11-19T02:33:24.879112Z","steps":["trace[886070201] 'range keys from in-memory index tree'  (duration: 165.042303ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:33:50 up  1:16,  0 user,  load average: 5.31, 3.83, 2.54
	Linux old-k8s-version-691094 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [dda3cde60adcefe6dc905f202c5021fdb56f1c94c37adce1fdae5c18d6080acc] <==
	I1119 02:33:23.381383       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 02:33:23.381729       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1119 02:33:23.381928       1 main.go:148] setting mtu 1500 for CNI 
	I1119 02:33:23.381949       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 02:33:23.381981       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T02:33:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 02:33:23.680627       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 02:33:23.680906       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 02:33:23.680921       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 02:33:23.780187       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 02:33:24.081044       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 02:33:24.081082       1 metrics.go:72] Registering metrics
	I1119 02:33:24.081144       1 controller.go:711] "Syncing nftables rules"
	I1119 02:33:33.680704       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 02:33:33.680751       1 main.go:301] handling current node
	I1119 02:33:43.681343       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 02:33:43.681445       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2710c5af3eee6491ef45de25344cda5fa8a6bddc3604a03908e7ec36cc3ec259] <==
	I1119 02:33:03.449188       1 shared_informer.go:318] Caches are synced for configmaps
	I1119 02:33:03.450069       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1119 02:33:03.451682       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1119 02:33:03.451712       1 aggregator.go:166] initial CRD sync complete...
	I1119 02:33:03.451721       1 autoregister_controller.go:141] Starting autoregister controller
	I1119 02:33:03.451728       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 02:33:03.451870       1 cache.go:39] Caches are synced for autoregister controller
	I1119 02:33:03.451900       1 controller.go:624] quota admission added evaluator for: namespaces
	I1119 02:33:03.454460       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1119 02:33:03.652666       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 02:33:04.364452       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 02:33:04.370792       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 02:33:04.370811       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 02:33:05.242226       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 02:33:05.293279       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 02:33:05.360305       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 02:33:05.367019       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1119 02:33:05.368321       1 controller.go:624] quota admission added evaluator for: endpoints
	I1119 02:33:05.374006       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 02:33:05.419797       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1119 02:33:06.994734       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1119 02:33:07.008440       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 02:33:07.022009       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1119 02:33:18.778955       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1119 02:33:19.128034       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
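
The "quota admission added evaluator" lines are routine: the ResourceQuota admission plugin registers an evaluator the first time each resource kind is created. Which admission plugins are explicitly enabled can be read from the apiserver's static pod manifest (a sketch, assuming the kubeadm default path):

    minikube ssh -p old-k8s-version-691094 -- \
      sudo grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml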
	
	
	==> kube-controller-manager [ae40aa345e79cbe278439afee2a5038c48c1ac05f3405d97259e5af73e3fbf92] <==
	I1119 02:33:18.479638       1 shared_informer.go:318] Caches are synced for resource quota
	I1119 02:33:18.481935       1 shared_informer.go:318] Caches are synced for attach detach
	I1119 02:33:18.565340       1 shared_informer.go:318] Caches are synced for resource quota
	I1119 02:33:18.783277       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1119 02:33:18.883719       1 shared_informer.go:318] Caches are synced for garbage collector
	I1119 02:33:18.927024       1 shared_informer.go:318] Caches are synced for garbage collector
	I1119 02:33:18.927059       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1119 02:33:19.141958       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-79df5"
	I1119 02:33:19.147560       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-b9cwh"
	I1119 02:33:19.292248       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-hbwsw"
	I1119 02:33:19.320651       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-bbvqz"
	I1119 02:33:19.334988       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="551.937804ms"
	I1119 02:33:19.346766       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.713541ms"
	I1119 02:33:19.347224       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="102.884µs"
	I1119 02:33:19.347583       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="248.565µs"
	I1119 02:33:19.743275       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1119 02:33:19.759521       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-hbwsw"
	I1119 02:33:19.767623       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="24.389324ms"
	I1119 02:33:19.777179       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.488995ms"
	I1119 02:33:19.777312       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.241µs"
	I1119 02:33:33.782164       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="109.659µs"
	I1119 02:33:33.799126       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="108.486µs"
	I1119 02:33:35.226953       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.284192ms"
	I1119 02:33:35.227058       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.818µs"
	I1119 02:33:38.373535       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
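
The controller-manager trace shows coredns briefly scaled up to 2 replicas and then back down to 1, matching the "rescaled to 1 replicas" kapi line earlier in the log; minikube pins CoreDNS at a single replica. To confirm the final state (a sketch):

    # Should print 1 once minikube's rescale has been applied:
    kubectl --context old-k8s-version-691094 -n kube-system \
      get deployment coredns -o jsonpath='{.spec.replicas}'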
	
	
	==> kube-proxy [5dde09d6b5534707795709157ee81edeb05e31172278aaf5526347ba15edf149] <==
	I1119 02:33:19.808172       1 server_others.go:69] "Using iptables proxy"
	I1119 02:33:19.820176       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1119 02:33:19.845599       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 02:33:19.848312       1 server_others.go:152] "Using iptables Proxier"
	I1119 02:33:19.848362       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1119 02:33:19.848394       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1119 02:33:19.848428       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1119 02:33:19.848742       1 server.go:846] "Version info" version="v1.28.0"
	I1119 02:33:19.848757       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:33:19.849540       1 config.go:97] "Starting endpoint slice config controller"
	I1119 02:33:19.849569       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1119 02:33:19.849599       1 config.go:188] "Starting service config controller"
	I1119 02:33:19.849621       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1119 02:33:19.849806       1 config.go:315] "Starting node config controller"
	I1119 02:33:19.849822       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1119 02:33:19.949953       1 shared_informer.go:318] Caches are synced for node config
	I1119 02:33:19.949980       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1119 02:33:19.949995       1 shared_informer.go:318] Caches are synced for service config
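
kube-proxy came up in iptables mode with an IPv4-only cluster CIDR, so IPv6 local-traffic detection falls back to no-op. The effective mode lives in the kubeadm-managed configmap (name and data key assume the kubeadm defaults):

    # The config.conf key holds the KubeProxyConfiguration; grep for its mode:
    kubectl --context old-k8s-version-691094 -n kube-system \
      get configmap kube-proxy -o jsonpath='{.data.config\.conf}' | grep -i mode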
	
	
	==> kube-scheduler [b77b79fa6a466aa3e18c8bd7eba3c607337982e750126d443bc923b253db1773] <==
	W1119 02:33:04.399499       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1119 02:33:04.399543       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1119 02:33:04.424049       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1119 02:33:04.424093       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1119 02:33:04.458386       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1119 02:33:04.458837       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1119 02:33:04.470115       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1119 02:33:04.470164       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1119 02:33:04.561554       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1119 02:33:04.561594       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1119 02:33:04.673671       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1119 02:33:04.673712       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1119 02:33:04.688034       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1119 02:33:04.688077       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1119 02:33:04.688037       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1119 02:33:04.688108       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1119 02:33:04.689780       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1119 02:33:04.689824       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1119 02:33:04.704173       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1119 02:33:04.704221       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1119 02:33:04.736082       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1119 02:33:04.736401       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1119 02:33:04.770743       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1119 02:33:04.770839       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1119 02:33:06.915816       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
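
The burst of "forbidden" list/watch errors at 02:33:04 is the usual scheduler startup race: it begins syncing informers before kubeadm finishes bootstrapping RBAC at ~02:33:05, and the caches sync cleanly at 02:33:06. Once RBAC is in place the permissions can be verified (a sketch):

    # Should answer "yes" after bootstrap completes:
    kubectl --context old-k8s-version-691094 auth can-i list pods \
      --as=system:kube-scheduler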
	
	
	==> kubelet <==
	Nov 19 02:33:18 old-k8s-version-691094 kubelet[1520]: I1119 02:33:18.503691    1520 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 19 02:33:19 old-k8s-version-691094 kubelet[1520]: I1119 02:33:19.150157    1520 topology_manager.go:215] "Topology Admit Handler" podUID="d23dd2d3-6511-45fb-ae70-d1da7b9b6b28" podNamespace="kube-system" podName="kube-proxy-79df5"
	Nov 19 02:33:19 old-k8s-version-691094 kubelet[1520]: I1119 02:33:19.159071    1520 topology_manager.go:215] "Topology Admit Handler" podUID="3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352" podNamespace="kube-system" podName="kindnet-b9cwh"
	Nov 19 02:33:19 old-k8s-version-691094 kubelet[1520]: I1119 02:33:19.262847    1520 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352-xtables-lock\") pod \"kindnet-b9cwh\" (UID: \"3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352\") " pod="kube-system/kindnet-b9cwh"
	Nov 19 02:33:19 old-k8s-version-691094 kubelet[1520]: I1119 02:33:19.262970    1520 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352-lib-modules\") pod \"kindnet-b9cwh\" (UID: \"3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352\") " pod="kube-system/kindnet-b9cwh"
	Nov 19 02:33:19 old-k8s-version-691094 kubelet[1520]: I1119 02:33:19.263115    1520 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88q7s\" (UniqueName: \"kubernetes.io/projected/3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352-kube-api-access-88q7s\") pod \"kindnet-b9cwh\" (UID: \"3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352\") " pod="kube-system/kindnet-b9cwh"
	Nov 19 02:33:19 old-k8s-version-691094 kubelet[1520]: I1119 02:33:19.263270    1520 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d23dd2d3-6511-45fb-ae70-d1da7b9b6b28-kube-proxy\") pod \"kube-proxy-79df5\" (UID: \"d23dd2d3-6511-45fb-ae70-d1da7b9b6b28\") " pod="kube-system/kube-proxy-79df5"
	Nov 19 02:33:19 old-k8s-version-691094 kubelet[1520]: I1119 02:33:19.263312    1520 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nccg9\" (UniqueName: \"kubernetes.io/projected/d23dd2d3-6511-45fb-ae70-d1da7b9b6b28-kube-api-access-nccg9\") pod \"kube-proxy-79df5\" (UID: \"d23dd2d3-6511-45fb-ae70-d1da7b9b6b28\") " pod="kube-system/kube-proxy-79df5"
	Nov 19 02:33:19 old-k8s-version-691094 kubelet[1520]: I1119 02:33:19.263480    1520 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352-cni-cfg\") pod \"kindnet-b9cwh\" (UID: \"3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352\") " pod="kube-system/kindnet-b9cwh"
	Nov 19 02:33:19 old-k8s-version-691094 kubelet[1520]: I1119 02:33:19.263516    1520 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d23dd2d3-6511-45fb-ae70-d1da7b9b6b28-xtables-lock\") pod \"kube-proxy-79df5\" (UID: \"d23dd2d3-6511-45fb-ae70-d1da7b9b6b28\") " pod="kube-system/kube-proxy-79df5"
	Nov 19 02:33:19 old-k8s-version-691094 kubelet[1520]: I1119 02:33:19.263683    1520 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d23dd2d3-6511-45fb-ae70-d1da7b9b6b28-lib-modules\") pod \"kube-proxy-79df5\" (UID: \"d23dd2d3-6511-45fb-ae70-d1da7b9b6b28\") " pod="kube-system/kube-proxy-79df5"
	Nov 19 02:33:20 old-k8s-version-691094 kubelet[1520]: I1119 02:33:20.171191    1520 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-79df5" podStartSLOduration=1.171130716 podCreationTimestamp="2025-11-19 02:33:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:33:20.171028011 +0000 UTC m=+13.209359984" watchObservedRunningTime="2025-11-19 02:33:20.171130716 +0000 UTC m=+13.209462689"
	Nov 19 02:33:33 old-k8s-version-691094 kubelet[1520]: I1119 02:33:33.749213    1520 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 19 02:33:33 old-k8s-version-691094 kubelet[1520]: I1119 02:33:33.782895    1520 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-b9cwh" podStartSLOduration=11.682944737 podCreationTimestamp="2025-11-19 02:33:19 +0000 UTC" firstStartedPulling="2025-11-19 02:33:19.906795407 +0000 UTC m=+12.945127373" lastFinishedPulling="2025-11-19 02:33:23.006683769 +0000 UTC m=+16.045015737" observedRunningTime="2025-11-19 02:33:24.325010453 +0000 UTC m=+17.363342437" watchObservedRunningTime="2025-11-19 02:33:33.782833101 +0000 UTC m=+26.821165074"
	Nov 19 02:33:33 old-k8s-version-691094 kubelet[1520]: I1119 02:33:33.784096    1520 topology_manager.go:215] "Topology Admit Handler" podUID="56c0e21e-9d86-46c6-bc02-2a75554c0f07" podNamespace="kube-system" podName="coredns-5dd5756b68-bbvqz"
	Nov 19 02:33:33 old-k8s-version-691094 kubelet[1520]: I1119 02:33:33.784305    1520 topology_manager.go:215] "Topology Admit Handler" podUID="135636ea-f34f-4bfc-b2f6-cbbf3e91ca30" podNamespace="kube-system" podName="storage-provisioner"
	Nov 19 02:33:33 old-k8s-version-691094 kubelet[1520]: I1119 02:33:33.865360    1520 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/56c0e21e-9d86-46c6-bc02-2a75554c0f07-config-volume\") pod \"coredns-5dd5756b68-bbvqz\" (UID: \"56c0e21e-9d86-46c6-bc02-2a75554c0f07\") " pod="kube-system/coredns-5dd5756b68-bbvqz"
	Nov 19 02:33:33 old-k8s-version-691094 kubelet[1520]: I1119 02:33:33.865506    1520 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf7fj\" (UniqueName: \"kubernetes.io/projected/135636ea-f34f-4bfc-b2f6-cbbf3e91ca30-kube-api-access-sf7fj\") pod \"storage-provisioner\" (UID: \"135636ea-f34f-4bfc-b2f6-cbbf3e91ca30\") " pod="kube-system/storage-provisioner"
	Nov 19 02:33:33 old-k8s-version-691094 kubelet[1520]: I1119 02:33:33.865599    1520 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnwp6\" (UniqueName: \"kubernetes.io/projected/56c0e21e-9d86-46c6-bc02-2a75554c0f07-kube-api-access-rnwp6\") pod \"coredns-5dd5756b68-bbvqz\" (UID: \"56c0e21e-9d86-46c6-bc02-2a75554c0f07\") " pod="kube-system/coredns-5dd5756b68-bbvqz"
	Nov 19 02:33:33 old-k8s-version-691094 kubelet[1520]: I1119 02:33:33.865640    1520 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/135636ea-f34f-4bfc-b2f6-cbbf3e91ca30-tmp\") pod \"storage-provisioner\" (UID: \"135636ea-f34f-4bfc-b2f6-cbbf3e91ca30\") " pod="kube-system/storage-provisioner"
	Nov 19 02:33:35 old-k8s-version-691094 kubelet[1520]: I1119 02:33:35.207665    1520 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.207611574 podCreationTimestamp="2025-11-19 02:33:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:33:35.207180919 +0000 UTC m=+28.245512895" watchObservedRunningTime="2025-11-19 02:33:35.207611574 +0000 UTC m=+28.245943607"
	Nov 19 02:33:37 old-k8s-version-691094 kubelet[1520]: I1119 02:33:37.634226    1520 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-bbvqz" podStartSLOduration=18.634166172 podCreationTimestamp="2025-11-19 02:33:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:33:35.220124215 +0000 UTC m=+28.258456188" watchObservedRunningTime="2025-11-19 02:33:37.634166172 +0000 UTC m=+30.672498146"
	Nov 19 02:33:37 old-k8s-version-691094 kubelet[1520]: I1119 02:33:37.634483    1520 topology_manager.go:215] "Topology Admit Handler" podUID="90639f81-cb90-45ed-a6f9-0112e27e5bcb" podNamespace="default" podName="busybox"
	Nov 19 02:33:37 old-k8s-version-691094 kubelet[1520]: I1119 02:33:37.690929    1520 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7znp\" (UniqueName: \"kubernetes.io/projected/90639f81-cb90-45ed-a6f9-0112e27e5bcb-kube-api-access-f7znp\") pod \"busybox\" (UID: \"90639f81-cb90-45ed-a6f9-0112e27e5bcb\") " pod="default/busybox"
	Nov 19 02:33:41 old-k8s-version-691094 kubelet[1520]: I1119 02:33:41.220011    1520 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.919660586 podCreationTimestamp="2025-11-19 02:33:37 +0000 UTC" firstStartedPulling="2025-11-19 02:33:38.080932747 +0000 UTC m=+31.119264714" lastFinishedPulling="2025-11-19 02:33:40.381236106 +0000 UTC m=+33.419568070" observedRunningTime="2025-11-19 02:33:41.219704244 +0000 UTC m=+34.258036238" watchObservedRunningTime="2025-11-19 02:33:41.219963942 +0000 UTC m=+34.258295913"
	
	
	==> storage-provisioner [e773989cb5b9719f34c18e9670a458f821a72e3b0c1f48c1667978ae16fa12a4] <==
	I1119 02:33:34.305768       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 02:33:34.314850       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 02:33:34.314906       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1119 02:33:34.323210       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 02:33:34.323287       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"136e4121-044b-4dee-aaad-3e5583b7c2c1", APIVersion:"v1", ResourceVersion:"431", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-691094_76aba13c-9d9b-4e98-851a-12b3d58d7b2a became leader
	I1119 02:33:34.323354       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-691094_76aba13c-9d9b-4e98-851a-12b3d58d7b2a!
	I1119 02:33:34.423715       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-691094_76aba13c-9d9b-4e98-851a-12b3d58d7b2a!
	

-- /stdout --
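
Note: the storage-provisioner log above shows a standard client-go leader election over an Endpoints lock (kube-system/k8s.io-minikube-hostpath). While the cluster is still up, the current holder can be read back from the lock object; the sketch below is hypothetical, assuming kubectl can reach the old-k8s-version-691094 context and that the provisioner uses the usual control-plane.alpha.kubernetes.io/leader annotation (the LeaderElection event above suggests it does).

// leadercheck.go: hypothetical helper; prints the LeaderElectionRecord JSON
// (holderIdentity, acquireTime, renewTime) stored on the Endpoints lock.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "old-k8s-version-691094",
		"-n", "kube-system", "get", "endpoints", "k8s.io-minikube-hostpath",
		"-o", `jsonpath={.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}`).CombinedOutput()
	if err != nil {
		fmt.Println("lookup failed:", err, string(out))
		return
	}
	fmt.Println(string(out))
}
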
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-691094 -n old-k8s-version-691094
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-691094 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (14.23s)

TestStartStop/group/no-preload/serial/DeployApp (12.1s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-483142 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [90b24763-24ed-4631-9502-e0fab55d3520] Pending
helpers_test.go:352: "busybox" [90b24763-24ed-4631-9502-e0fab55d3520] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [90b24763-24ed-4631-9502-e0fab55d3520] Running
E1119 02:33:54.422184   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/functional-266785/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003417603s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-483142 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
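
The line above is the actual failure: after the busybox pod turns healthy, the test reads the open-file soft limit inside the container and expects 1048576, but the pod reports 1024 (a common runtime default, suggesting the raised RLIMIT_NOFILE did not propagate into the pod). A minimal standalone sketch of the same check, assuming kubectl is on PATH and the no-preload-483142 context still exists:

// ulimitcheck.go: re-runs the exact probe from the transcript above and
// applies the test's expectation to its output.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same command the test runs at start_stop_delete_test.go:194.
	out, err := exec.Command("kubectl", "--context", "no-preload-483142",
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").CombinedOutput()
	if err != nil {
		fmt.Println("exec failed:", err, string(out))
		return
	}
	got := strings.TrimSpace(string(out))
	if got != "1048576" { // the limit the test expects
		fmt.Printf("'ulimit -n' returned %s, expected 1048576\n", got)
	}
}
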
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-483142
helpers_test.go:243: (dbg) docker inspect no-preload-483142:

-- stdout --
	[
	    {
	        "Id": "aac37d788f49958ff52dac6090e9aa2cd11fb1e54edad896420bf3f5e737a0af",
	        "Created": "2025-11-19T02:32:57.689763981Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 307739,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T02:32:57.726804627Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/aac37d788f49958ff52dac6090e9aa2cd11fb1e54edad896420bf3f5e737a0af/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/aac37d788f49958ff52dac6090e9aa2cd11fb1e54edad896420bf3f5e737a0af/hostname",
	        "HostsPath": "/var/lib/docker/containers/aac37d788f49958ff52dac6090e9aa2cd11fb1e54edad896420bf3f5e737a0af/hosts",
	        "LogPath": "/var/lib/docker/containers/aac37d788f49958ff52dac6090e9aa2cd11fb1e54edad896420bf3f5e737a0af/aac37d788f49958ff52dac6090e9aa2cd11fb1e54edad896420bf3f5e737a0af-json.log",
	        "Name": "/no-preload-483142",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-483142:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-483142",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "aac37d788f49958ff52dac6090e9aa2cd11fb1e54edad896420bf3f5e737a0af",
	                "LowerDir": "/var/lib/docker/overlay2/bb47bc624d7fa5f37525774a3ae9ce17221988bf05df4a93a6cf6eb317eb354d-init/diff:/var/lib/docker/overlay2/de7938e6a920c133c8c6b988444cfbf6706fdc6982445229ca70e2488a725edb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bb47bc624d7fa5f37525774a3ae9ce17221988bf05df4a93a6cf6eb317eb354d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bb47bc624d7fa5f37525774a3ae9ce17221988bf05df4a93a6cf6eb317eb354d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bb47bc624d7fa5f37525774a3ae9ce17221988bf05df4a93a6cf6eb317eb354d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-483142",
	                "Source": "/var/lib/docker/volumes/no-preload-483142/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-483142",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-483142",
	                "name.minikube.sigs.k8s.io": "no-preload-483142",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c4ed69048ab3ecdf23b4ad8f556ed685f28d61dfda86ca9e242cdd1d08140c5a",
	            "SandboxKey": "/var/run/docker/netns/c4ed69048ab3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-483142": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c1155ea75a9420d95538eab4308c63c41e9b9b6daf36899badbd1b70df2e1f7a",
	                    "EndpointID": "58d34fb6835283be599234219418cf59aeb02160d91eb2865b3d13090e612999",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "8a:e5:53:d9:83:73",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-483142",
	                        "aac37d788f49"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
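
The NetworkSettings.Ports map in the inspect output above is what the helpers use to reach the node; minikube itself resolves the SSH port with the Go template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} (visible in the Last Start log further down). A hypothetical equivalent that parses the inspect JSON directly (the inspectEntry type is illustrative, trimmed to the fields used):

// portlookup.go: prints the host-side binding of the apiserver port (8443/tcp),
// which per the JSON above is 127.0.0.1:33103.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "no-preload-483142").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil || len(entries) == 0 {
		fmt.Println("unexpected inspect output:", err)
		return
	}
	for _, b := range entries[0].NetworkSettings.Ports["8443/tcp"] {
		fmt.Printf("apiserver reachable at %s:%s\n", b.HostIp, b.HostPort)
	}
}
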
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-483142 -n no-preload-483142
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-483142 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-483142 logs -n 25: (1.075762291s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                             ARGS                                                                             │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-212776 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                        │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cat /etc/kubernetes/kubelet.conf                                                                                                       │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cat /var/lib/kubelet/config.yaml                                                                                                       │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo systemctl status docker --all --full --no-pager                                                                                        │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo systemctl cat docker --no-pager                                                                                                        │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cat /etc/docker/daemon.json                                                                                                            │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo docker system info                                                                                                                     │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo systemctl status cri-docker --all --full --no-pager                                                                                    │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo systemctl cat cri-docker --no-pager                                                                                                    │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                               │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                         │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cri-dockerd --version                                                                                                                  │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo systemctl status containerd --all --full --no-pager                                                                                    │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo systemctl cat containerd --no-pager                                                                                                    │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cat /lib/systemd/system/containerd.service                                                                                             │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cat /etc/containerd/config.toml                                                                                                        │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo containerd config dump                                                                                                                 │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo systemctl status crio --all --full --no-pager                                                                                          │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo systemctl cat crio --no-pager                                                                                                          │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo crio config                                                                                                                            │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ delete  │ -p bridge-212776                                                                                                                                             │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ start   │ -p embed-certs-168452 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ embed-certs-168452     │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-691094 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                 │ old-k8s-version-691094 │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ stop    │ -p old-k8s-version-691094 --alsologtostderr -v=3                                                                                                             │ old-k8s-version-691094 │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 02:33:19
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 02:33:19.818158  315363 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:33:19.818478  315363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:33:19.818490  315363 out.go:374] Setting ErrFile to fd 2...
	I1119 02:33:19.818495  315363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:33:19.818721  315363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11107/.minikube/bin
	I1119 02:33:19.819330  315363 out.go:368] Setting JSON to false
	I1119 02:33:19.820616  315363 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4540,"bootTime":1763515060,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 02:33:19.820746  315363 start.go:143] virtualization: kvm guest
	I1119 02:33:19.822862  315363 out.go:179] * [embed-certs-168452] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 02:33:19.824498  315363 notify.go:221] Checking for updates...
	I1119 02:33:19.825083  315363 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:33:19.827189  315363 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:33:19.828628  315363 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-11107/kubeconfig
	I1119 02:33:19.830282  315363 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11107/.minikube
	I1119 02:33:19.832156  315363 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 02:33:19.833558  315363 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:33:19.835289  315363 config.go:182] Loaded profile config "kubernetes-upgrade-896338": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:33:19.835456  315363 config.go:182] Loaded profile config "no-preload-483142": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:33:19.835531  315363 config.go:182] Loaded profile config "old-k8s-version-691094": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1119 02:33:19.835628  315363 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:33:19.869670  315363 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 02:33:19.869754  315363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:33:19.948056  315363 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:91 SystemTime:2025-11-19 02:33:19.935291829 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:33:19.948230  315363 docker.go:319] overlay module found
	I1119 02:33:19.949713  315363 out.go:179] * Using the docker driver based on user configuration
	I1119 02:33:19.290831  301934 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:33:19.290855  301934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:33:19.290915  301934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-691094
	I1119 02:33:19.311399  301934 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:33:19.311423  301934 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:33:19.311589  301934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-691094
	I1119 02:33:19.329209  301934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/old-k8s-version-691094/id_rsa Username:docker}
	I1119 02:33:19.348646  301934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/old-k8s-version-691094/id_rsa Username:docker}
	I1119 02:33:19.386878  301934 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 02:33:19.430928  301934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:33:19.450594  301934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:33:19.476197  301934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:33:19.710133  301934 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1119 02:33:19.711417  301934 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-691094" to be "Ready" ...
	I1119 02:33:19.994360  301934 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 02:33:19.950788  315363 start.go:309] selected driver: docker
	I1119 02:33:19.950820  315363 start.go:930] validating driver "docker" against <nil>
	I1119 02:33:19.950835  315363 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:33:19.951688  315363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:33:20.027806  315363 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:91 SystemTime:2025-11-19 02:33:20.015781927 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:33:20.028020  315363 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 02:33:20.028315  315363 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:33:20.030421  315363 out.go:179] * Using Docker driver with root privileges
	I1119 02:33:20.031895  315363 cni.go:84] Creating CNI manager for ""
	I1119 02:33:20.031986  315363 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 02:33:20.031997  315363 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 02:33:20.032066  315363 start.go:353] cluster config:
	{Name:embed-certs-168452 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-168452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:33:20.034765  315363 out.go:179] * Starting "embed-certs-168452" primary control-plane node in "embed-certs-168452" cluster
	I1119 02:33:20.037487  315363 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1119 02:33:20.039029  315363 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 02:33:20.040485  315363 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 02:33:20.040520  315363 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1119 02:33:20.040528  315363 cache.go:65] Caching tarball of preloaded images
	I1119 02:33:20.040583  315363 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 02:33:20.040607  315363 preload.go:238] Found /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1119 02:33:20.040616  315363 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1119 02:33:20.040718  315363 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/config.json ...
	I1119 02:33:20.040739  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/config.json: {Name:mk2c1cb92572f9f7372f9d895b2c58b49c99bb3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:20.063579  315363 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 02:33:20.063610  315363 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 02:33:20.063636  315363 cache.go:243] Successfully downloaded all kic artifacts
	I1119 02:33:20.063670  315363 start.go:360] acquireMachinesLock for embed-certs-168452: {Name:mk4860299f8ff219c79992500844e49d455bd43a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:33:20.063790  315363 start.go:364] duration metric: took 102.461µs to acquireMachinesLock for "embed-certs-168452"
	I1119 02:33:20.063835  315363 start.go:93] Provisioning new machine with config: &{Name:embed-certs-168452 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-168452 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 02:33:20.063944  315363 start.go:125] createHost starting for "" (driver="docker")
	I1119 02:33:19.995882  301934 addons.go:515] duration metric: took 741.418352ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 02:33:20.065989  315363 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 02:33:20.066193  315363 start.go:159] libmachine.API.Create for "embed-certs-168452" (driver="docker")
	I1119 02:33:20.066226  315363 client.go:173] LocalClient.Create starting
	I1119 02:33:20.066306  315363 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem
	I1119 02:33:20.066338  315363 main.go:143] libmachine: Decoding PEM data...
	I1119 02:33:20.066360  315363 main.go:143] libmachine: Parsing certificate...
	I1119 02:33:20.066438  315363 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem
	I1119 02:33:20.066464  315363 main.go:143] libmachine: Decoding PEM data...
	I1119 02:33:20.066475  315363 main.go:143] libmachine: Parsing certificate...
	I1119 02:33:20.066835  315363 cli_runner.go:164] Run: docker network inspect embed-certs-168452 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 02:33:20.087889  315363 cli_runner.go:211] docker network inspect embed-certs-168452 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 02:33:20.087987  315363 network_create.go:284] running [docker network inspect embed-certs-168452] to gather additional debugging logs...
	I1119 02:33:20.088020  315363 cli_runner.go:164] Run: docker network inspect embed-certs-168452
	W1119 02:33:20.108512  315363 cli_runner.go:211] docker network inspect embed-certs-168452 returned with exit code 1
	I1119 02:33:20.108553  315363 network_create.go:287] error running [docker network inspect embed-certs-168452]: docker network inspect embed-certs-168452: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-168452 not found
	I1119 02:33:20.108577  315363 network_create.go:289] output of [docker network inspect embed-certs-168452]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-168452 not found
	
	** /stderr **
	I1119 02:33:20.108677  315363 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:33:20.129985  315363 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ed39016f2aa9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:16:a0:62:5a:51} reservation:<nil>}
	I1119 02:33:20.130640  315363 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-42b0c19d513b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:b2:bf:ca:ce:21:95} reservation:<nil>}
	I1119 02:33:20.131454  315363 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-002e39e6dc05 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0a:8e:34:24:50:a5} reservation:<nil>}
	I1119 02:33:20.132210  315363 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c1155ea75a94 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:76:37:ad:5a:d8:36} reservation:<nil>}
	I1119 02:33:20.133253  315363 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-3ec6f45a7001 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:12:9a:69:49:8b:1f} reservation:<nil>}
	I1119 02:33:20.134343  315363 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ddf580}
	I1119 02:33:20.134393  315363 network_create.go:124] attempt to create docker network embed-certs-168452 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1119 02:33:20.134459  315363 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-168452 embed-certs-168452
	I1119 02:33:20.192566  315363 network_create.go:108] docker network embed-certs-168452 192.168.94.0/24 created
	I1119 02:33:20.192597  315363 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-168452" container
	I1119 02:33:20.192665  315363 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 02:33:20.216991  315363 cli_runner.go:164] Run: docker volume create embed-certs-168452 --label name.minikube.sigs.k8s.io=embed-certs-168452 --label created_by.minikube.sigs.k8s.io=true
	I1119 02:33:20.240868  315363 oci.go:103] Successfully created a docker volume embed-certs-168452
	I1119 02:33:20.240948  315363 cli_runner.go:164] Run: docker run --rm --name embed-certs-168452-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-168452 --entrypoint /usr/bin/test -v embed-certs-168452:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1119 02:33:20.653772  315363 oci.go:107] Successfully prepared a docker volume embed-certs-168452
	I1119 02:33:20.653851  315363 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 02:33:20.653866  315363 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 02:33:20.653963  315363 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-168452:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	I1119 02:33:20.215687  301934 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-691094" context rescaled to 1 replicas
	W1119 02:33:21.715210  301934 node_ready.go:57] node "old-k8s-version-691094" has "Ready":"False" status (will retry)
	W1119 02:33:24.323644  301934 node_ready.go:57] node "old-k8s-version-691094" has "Ready":"False" status (will retry)
	I1119 02:33:28.147893  307222 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 02:33:28.147982  307222 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 02:33:28.148104  307222 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 02:33:28.148201  307222 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 02:33:28.148256  307222 kubeadm.go:319] OS: Linux
	I1119 02:33:28.148332  307222 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 02:33:28.148450  307222 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 02:33:28.148522  307222 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 02:33:28.148596  307222 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 02:33:28.148672  307222 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 02:33:28.148756  307222 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 02:33:28.148841  307222 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 02:33:28.148915  307222 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 02:33:28.149019  307222 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 02:33:28.149159  307222 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 02:33:28.149311  307222 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 02:33:28.149421  307222 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 02:33:28.151537  307222 out.go:252]   - Generating certificates and keys ...
	I1119 02:33:28.151647  307222 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 02:33:28.151774  307222 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 02:33:28.151834  307222 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 02:33:28.151902  307222 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 02:33:28.152000  307222 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 02:33:28.152068  307222 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 02:33:28.152179  307222 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 02:33:28.152343  307222 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-483142] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 02:33:28.152451  307222 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 02:33:28.152598  307222 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-483142] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 02:33:28.152690  307222 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 02:33:28.152796  307222 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 02:33:28.152837  307222 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 02:33:28.152894  307222 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 02:33:28.152945  307222 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 02:33:28.153002  307222 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 02:33:28.153051  307222 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 02:33:28.153118  307222 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 02:33:28.153171  307222 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 02:33:28.153255  307222 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 02:33:28.153358  307222 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 02:33:28.154609  307222 out.go:252]   - Booting up control plane ...
	I1119 02:33:28.154709  307222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 02:33:28.154821  307222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 02:33:28.154904  307222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 02:33:28.155033  307222 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 02:33:28.155173  307222 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 02:33:28.155323  307222 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 02:33:28.155456  307222 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 02:33:28.155501  307222 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 02:33:28.155631  307222 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 02:33:28.155728  307222 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 02:33:28.155805  307222 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001464049s
	I1119 02:33:28.155906  307222 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 02:33:28.156017  307222 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1119 02:33:28.156135  307222 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 02:33:28.156242  307222 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 02:33:28.156335  307222 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.319882231s
	I1119 02:33:28.156456  307222 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.432703999s
	I1119 02:33:28.156560  307222 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.001475545s
	I1119 02:33:28.156685  307222 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 02:33:28.156832  307222 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 02:33:28.156917  307222 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 02:33:28.157202  307222 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-483142 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 02:33:28.157272  307222 kubeadm.go:319] [bootstrap-token] Using token: nwrx92.0c942uuundzydmcz
	I1119 02:33:28.159046  307222 out.go:252]   - Configuring RBAC rules ...
	I1119 02:33:28.159207  307222 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 02:33:28.159328  307222 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 02:33:28.159549  307222 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 02:33:28.159720  307222 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 02:33:28.159922  307222 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 02:33:28.160077  307222 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 02:33:28.160254  307222 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 02:33:28.160329  307222 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 02:33:28.160427  307222 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 02:33:28.160443  307222 kubeadm.go:319] 
	I1119 02:33:28.160527  307222 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 02:33:28.160536  307222 kubeadm.go:319] 
	I1119 02:33:28.160603  307222 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 02:33:28.160610  307222 kubeadm.go:319] 
	I1119 02:33:28.160642  307222 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 02:33:28.160730  307222 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 02:33:28.160832  307222 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 02:33:28.160845  307222 kubeadm.go:319] 
	I1119 02:33:28.160922  307222 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 02:33:28.160942  307222 kubeadm.go:319] 
	I1119 02:33:28.161016  307222 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 02:33:28.161031  307222 kubeadm.go:319] 
	I1119 02:33:28.161114  307222 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 02:33:28.161229  307222 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 02:33:28.161347  307222 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 02:33:28.161359  307222 kubeadm.go:319] 
	I1119 02:33:28.161531  307222 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 02:33:28.161656  307222 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 02:33:28.161665  307222 kubeadm.go:319] 
	I1119 02:33:28.161797  307222 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token nwrx92.0c942uuundzydmcz \
	I1119 02:33:28.161968  307222 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7672c4bccf136c60db27ba1101c0494bf335a1a16bcd609d07ab95cd78c58a5a \
	I1119 02:33:28.162022  307222 kubeadm.go:319] 	--control-plane 
	I1119 02:33:28.162036  307222 kubeadm.go:319] 
	I1119 02:33:28.162163  307222 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 02:33:28.162174  307222 kubeadm.go:319] 
	I1119 02:33:28.162301  307222 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token nwrx92.0c942uuundzydmcz \
	I1119 02:33:28.162456  307222 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7672c4bccf136c60db27ba1101c0494bf335a1a16bcd609d07ab95cd78c58a5a 
	I1119 02:33:28.162469  307222 cni.go:84] Creating CNI manager for ""
	I1119 02:33:28.162475  307222 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 02:33:28.164382  307222 out.go:179] * Configuring CNI (Container Networking Interface) ...
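The join commands printed by kubeadm above embed a bootstrap token with a default TTL of 24h. If a node were to join after the token expires, an equivalent command could be regenerated on the control plane; a minimal sketch using this run's profile name (not a step the test performs):

	out/minikube-linux-amd64 ssh -p no-preload-483142 -- "sudo kubeadm token create --print-join-command"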
	I1119 02:33:25.786283  315363 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-168452:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (5.132274902s)
	I1119 02:33:25.786322  315363 kic.go:203] duration metric: took 5.132452147s to extract preloaded images to volume ...
	W1119 02:33:25.786460  315363 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1119 02:33:25.786504  315363 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1119 02:33:25.786554  315363 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 02:33:25.853413  315363 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-168452 --name embed-certs-168452 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-168452 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-168452 --network embed-certs-168452 --ip 192.168.94.2 --volume embed-certs-168452:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1119 02:33:26.238651  315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Running}}
	I1119 02:33:26.261169  315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Status}}
	I1119 02:33:26.284313  315363 cli_runner.go:164] Run: docker exec embed-certs-168452 stat /var/lib/dpkg/alternatives/iptables
	I1119 02:33:26.336955  315363 oci.go:144] the created container "embed-certs-168452" has a running status.
	I1119 02:33:26.336985  315363 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa...
	I1119 02:33:26.484310  315363 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 02:33:26.517116  315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Status}}
	I1119 02:33:26.542901  315363 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 02:33:26.542925  315363 kic_runner.go:114] Args: [docker exec --privileged embed-certs-168452 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 02:33:26.595205  315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Status}}
	I1119 02:33:26.623359  315363 machine.go:94] provisionDockerMachine start ...
	I1119 02:33:26.623527  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:26.646254  315363 main.go:143] libmachine: Using SSH client type: native
	I1119 02:33:26.646550  315363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33105 <nil> <nil>}
	I1119 02:33:26.646569  315363 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 02:33:26.799221  315363 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-168452
	
	I1119 02:33:26.799250  315363 ubuntu.go:182] provisioning hostname "embed-certs-168452"
	I1119 02:33:26.799334  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:26.820929  315363 main.go:143] libmachine: Using SSH client type: native
	I1119 02:33:26.821188  315363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33105 <nil> <nil>}
	I1119 02:33:26.821210  315363 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-168452 && echo "embed-certs-168452" | sudo tee /etc/hostname
	I1119 02:33:26.966035  315363 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-168452
	
	I1119 02:33:26.966125  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:26.985276  315363 main.go:143] libmachine: Using SSH client type: native
	I1119 02:33:26.985598  315363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33105 <nil> <nil>}
	I1119 02:33:26.985633  315363 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-168452' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-168452/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-168452' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 02:33:27.121670  315363 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 02:33:27.121703  315363 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-11107/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-11107/.minikube}
	I1119 02:33:27.121727  315363 ubuntu.go:190] setting up certificates
	I1119 02:33:27.123000  315363 provision.go:84] configureAuth start
	I1119 02:33:27.123195  315363 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-168452
	I1119 02:33:27.143490  315363 provision.go:143] copyHostCerts
	I1119 02:33:27.143570  315363 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11107/.minikube/ca.pem, removing ...
	I1119 02:33:27.143580  315363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11107/.minikube/ca.pem
	I1119 02:33:27.143645  315363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-11107/.minikube/ca.pem (1082 bytes)
	I1119 02:33:27.143736  315363 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11107/.minikube/cert.pem, removing ...
	I1119 02:33:27.143744  315363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11107/.minikube/cert.pem
	I1119 02:33:27.143773  315363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-11107/.minikube/cert.pem (1123 bytes)
	I1119 02:33:27.143829  315363 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11107/.minikube/key.pem, removing ...
	I1119 02:33:27.143835  315363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11107/.minikube/key.pem
	I1119 02:33:27.143858  315363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-11107/.minikube/key.pem (1675 bytes)
	I1119 02:33:27.143923  315363 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-11107/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca-key.pem org=jenkins.embed-certs-168452 san=[127.0.0.1 192.168.94.2 embed-certs-168452 localhost minikube]
	I1119 02:33:27.239080  315363 provision.go:177] copyRemoteCerts
	I1119 02:33:27.239165  315363 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 02:33:27.239198  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:27.262397  315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
	I1119 02:33:27.362967  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 02:33:27.387666  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1119 02:33:27.418735  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 02:33:27.446098  315363 provision.go:87] duration metric: took 323.082791ms to configureAuth
	I1119 02:33:27.446129  315363 ubuntu.go:206] setting minikube options for container-runtime
	I1119 02:33:27.446313  315363 config.go:182] Loaded profile config "embed-certs-168452": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:33:27.446327  315363 machine.go:97] duration metric: took 822.891862ms to provisionDockerMachine
	I1119 02:33:27.446333  315363 client.go:176] duration metric: took 7.38010166s to LocalClient.Create
	I1119 02:33:27.446351  315363 start.go:167] duration metric: took 7.380160884s to libmachine.API.Create "embed-certs-168452"
	I1119 02:33:27.446358  315363 start.go:293] postStartSetup for "embed-certs-168452" (driver="docker")
	I1119 02:33:27.446409  315363 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 02:33:27.446465  315363 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 02:33:27.446501  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:27.470807  315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
	I1119 02:33:27.575097  315363 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 02:33:27.580067  315363 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 02:33:27.580102  315363 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 02:33:27.580115  315363 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11107/.minikube/addons for local assets ...
	I1119 02:33:27.580188  315363 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11107/.minikube/files for local assets ...
	I1119 02:33:27.580303  315363 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem -> 146572.pem in /etc/ssl/certs
	I1119 02:33:27.580434  315363 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 02:33:27.588848  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem --> /etc/ssl/certs/146572.pem (1708 bytes)
	I1119 02:33:27.611498  315363 start.go:296] duration metric: took 165.12815ms for postStartSetup
	I1119 02:33:27.611895  315363 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-168452
	I1119 02:33:27.630987  315363 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/config.json ...
	I1119 02:33:27.631276  315363 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:33:27.631327  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:27.650599  315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
	I1119 02:33:27.747119  315363 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 02:33:27.752242  315363 start.go:128] duration metric: took 7.68828048s to createHost
	I1119 02:33:27.752270  315363 start.go:83] releasing machines lock for "embed-certs-168452", held for 7.688466151s
	I1119 02:33:27.752448  315363 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-168452
	I1119 02:33:27.772595  315363 ssh_runner.go:195] Run: cat /version.json
	I1119 02:33:27.772634  315363 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 02:33:27.772668  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:27.772695  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:27.795020  315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
	I1119 02:33:27.795311  315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
	I1119 02:33:27.889466  315363 ssh_runner.go:195] Run: systemctl --version
	I1119 02:33:27.948057  315363 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 02:33:27.953270  315363 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 02:33:27.953328  315363 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 02:33:27.979962  315363 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 02:33:27.979983  315363 start.go:496] detecting cgroup driver to use...
	I1119 02:33:27.980013  315363 detect.go:190] detected "systemd" cgroup driver on host os
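The "systemd" result above is derived by inspecting the host; one common way to make the same determination by hand (not necessarily what detect.go does) is:

	stat -fc %T /sys/fs/cgroup/   # cgroup2fs => unified cgroup v2 hierarchy
	ps -p 1 -o comm=              # systemd as PID 1 => prefer the systemd cgroup driver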
	I1119 02:33:27.980050  315363 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1119 02:33:27.995148  315363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1119 02:33:28.009176  315363 docker.go:218] disabling cri-docker service (if available) ...
	I1119 02:33:28.009239  315363 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 02:33:28.028120  315363 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 02:33:28.047654  315363 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 02:33:28.137742  315363 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 02:33:28.233503  315363 docker.go:234] disabling docker service ...
	I1119 02:33:28.233569  315363 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 02:33:28.254546  315363 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 02:33:28.270970  315363 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 02:33:28.372358  315363 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 02:33:28.475816  315363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 02:33:28.494447  315363 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 02:33:28.514112  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1119 02:33:28.528713  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1119 02:33:28.542307  315363 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1119 02:33:28.542395  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1119 02:33:28.553682  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 02:33:28.564425  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1119 02:33:28.574563  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 02:33:28.585047  315363 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 02:33:28.594876  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1119 02:33:28.606066  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1119 02:33:28.616549  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1119 02:33:28.627283  315363 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 02:33:28.635846  315363 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 02:33:28.643854  315363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:33:28.727138  315363 ssh_runner.go:195] Run: sudo systemctl restart containerd
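The sed edits above rewrite /etc/containerd/config.toml in place before this restart; the cgroup-driver change can be spot-checked afterwards (the enclosing TOML table name differs between containerd 1.x and 2.x configs, so grep is a safer probe than a fixed path):

	out/minikube-linux-amd64 ssh -p embed-certs-168452 -- "grep -n SystemdCgroup /etc/containerd/config.toml"
	# expected: SystemdCgroup = true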
	I1119 02:33:28.825075  315363 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1119 02:33:28.825141  315363 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1119 02:33:28.829886  315363 start.go:564] Will wait 60s for crictl version
	I1119 02:33:28.829954  315363 ssh_runner.go:195] Run: which crictl
	I1119 02:33:28.834062  315363 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 02:33:28.859386  315363 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1119 02:33:28.859454  315363 ssh_runner.go:195] Run: containerd --version
	I1119 02:33:28.881932  315363 ssh_runner.go:195] Run: containerd --version
	I1119 02:33:28.905418  315363 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1119 02:33:28.906851  315363 cli_runner.go:164] Run: docker network inspect embed-certs-168452 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:33:28.925576  315363 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1119 02:33:28.930043  315363 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
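The temp-file dance above is needed because shell redirection runs as the invoking user, so a plain sudo echo ... >> /etc/hosts would be denied; filtering out any stale entry with grep -v and then sudo cp (or, for a simple append, sudo tee -a as used elsewhere in this log) sidesteps that:

	echo "192.168.94.1	host.minikube.internal" | sudo tee -a /etc/hosts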
	I1119 02:33:28.941472  315363 kubeadm.go:884] updating cluster {Name:embed-certs-168452 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-168452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 02:33:28.941570  315363 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 02:33:28.941633  315363 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:33:28.969084  315363 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 02:33:28.969102  315363 containerd.go:534] Images already preloaded, skipping extraction
	I1119 02:33:28.969159  315363 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:33:28.994529  315363 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 02:33:28.994549  315363 cache_images.go:86] Images are preloaded, skipping loading
	I1119 02:33:28.994556  315363 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 containerd true true} ...
	I1119 02:33:28.994637  315363 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-168452 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-168452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
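The empty ExecStart= line in the drop-in above is deliberate: systemd rejects a second ExecStart for a non-oneshot service unless the list is cleared first, so the blank assignment resets the base unit's command before the override takes effect. The merged unit can be inspected on the node with:

	systemctl cat kubelet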
	I1119 02:33:28.994694  315363 ssh_runner.go:195] Run: sudo crictl info
	I1119 02:33:29.023174  315363 cni.go:84] Creating CNI manager for ""
	I1119 02:33:29.023197  315363 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 02:33:29.023211  315363 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 02:33:29.023232  315363 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-168452 NodeName:embed-certs-168452 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:33:29.023337  315363 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-168452"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 02:33:29.023423  315363 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 02:33:29.032358  315363 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 02:33:29.032438  315363 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:33:29.041206  315363 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1119 02:33:29.056159  315363 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:33:29.074583  315363 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
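The rendered kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new before use. On kubeadm v1.26 and newer it can be sanity-checked offline; a sketch, assuming kubeadm sits alongside kubectl in the binaries directory implied above:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new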
	I1119 02:33:29.089316  315363 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:33:29.093854  315363 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:33:29.106602  315363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:33:29.193818  315363 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:33:29.220027  315363 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452 for IP: 192.168.94.2
	I1119 02:33:29.220053  315363 certs.go:195] generating shared ca certs ...
	I1119 02:33:29.220075  315363 certs.go:227] acquiring lock for ca certs: {Name:mk11d6789b2333e17b3937495b501fbcca15c242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:29.220231  315363 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-11107/.minikube/ca.key
	I1119 02:33:29.220278  315363 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-11107/.minikube/proxy-client-ca.key
	I1119 02:33:29.220287  315363 certs.go:257] generating profile certs ...
	I1119 02:33:29.220334  315363 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/client.key
	I1119 02:33:29.220351  315363 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/client.crt with IP's: []
	I1119 02:33:29.496773  315363 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/client.crt ...
	I1119 02:33:29.496800  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/client.crt: {Name:mkdb5e24f9c8b0d3d9849ba91ac24e28be0abdf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:29.496993  315363 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/client.key ...
	I1119 02:33:29.497006  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/client.key: {Name:mk5aa88fe9180cc5f94c07d5a968428b4ccf37cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:29.497088  315363 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.key.0e221cc2
	I1119 02:33:29.497102  315363 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.crt.0e221cc2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	W1119 02:33:26.721525  301934 node_ready.go:57] node "old-k8s-version-691094" has "Ready":"False" status (will retry)
	W1119 02:33:29.215940  301934 node_ready.go:57] node "old-k8s-version-691094" has "Ready":"False" status (will retry)
	I1119 02:33:28.165835  307222 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 02:33:28.176028  307222 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 02:33:28.176052  307222 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 02:33:28.195615  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 02:33:28.450816  307222 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 02:33:28.450899  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:28.450933  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-483142 minikube.k8s.io/updated_at=2025_11_19T02_33_28_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=no-preload-483142 minikube.k8s.io/primary=true
	I1119 02:33:28.538275  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:28.538445  307222 ops.go:34] apiserver oom_adj: -16
	I1119 02:33:29.038968  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:29.539224  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:30.038530  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:30.539271  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:31.038434  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:31.538496  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:32.038945  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:32.539001  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:33.038571  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:33.129034  307222 kubeadm.go:1114] duration metric: took 4.678195875s to wait for elevateKubeSystemPrivileges
	I1119 02:33:33.129095  307222 kubeadm.go:403] duration metric: took 17.40558167s to StartCluster
	I1119 02:33:33.129119  307222 settings.go:142] acquiring lock: {Name:mka23b77a5b0acf39a3c823ca2c708cd59c6f5ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:33.129202  307222 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11107/kubeconfig
	I1119 02:33:33.131182  307222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/kubeconfig: {Name:mka1017ae43351e828a5b28d36aa5ae57c3e40e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:33.131481  307222 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 02:33:33.131519  307222 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 02:33:33.131585  307222 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:33:33.131706  307222 addons.go:70] Setting storage-provisioner=true in profile "no-preload-483142"
	I1119 02:33:33.131748  307222 config.go:182] Loaded profile config "no-preload-483142": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:33:33.131794  307222 addons.go:70] Setting default-storageclass=true in profile "no-preload-483142"
	I1119 02:33:33.131827  307222 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-483142"
	I1119 02:33:33.131810  307222 addons.go:239] Setting addon storage-provisioner=true in "no-preload-483142"
	I1119 02:33:33.131959  307222 host.go:66] Checking if "no-preload-483142" exists ...
	I1119 02:33:33.132200  307222 cli_runner.go:164] Run: docker container inspect no-preload-483142 --format={{.State.Status}}
	I1119 02:33:33.132480  307222 cli_runner.go:164] Run: docker container inspect no-preload-483142 --format={{.State.Status}}
	I1119 02:33:33.134152  307222 out.go:179] * Verifying Kubernetes components...
	I1119 02:33:33.135585  307222 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:33:33.159834  307222 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:33:33.160479  307222 addons.go:239] Setting addon default-storageclass=true in "no-preload-483142"
	I1119 02:33:33.160545  307222 host.go:66] Checking if "no-preload-483142" exists ...
	I1119 02:33:33.161017  307222 cli_runner.go:164] Run: docker container inspect no-preload-483142 --format={{.State.Status}}
	I1119 02:33:33.161390  307222 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:33:33.161410  307222 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:33:33.161458  307222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-483142
	I1119 02:33:33.198354  307222 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:33:33.198390  307222 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:33:33.198448  307222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-483142
	I1119 02:33:33.198522  307222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/no-preload-483142/id_rsa Username:docker}
	I1119 02:33:33.223657  307222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/no-preload-483142/id_rsa Username:docker}
	I1119 02:33:33.248952  307222 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 02:33:33.322673  307222 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:33:33.348662  307222 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:33:33.354901  307222 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:33:33.503051  307222 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1119 02:33:33.504327  307222 node_ready.go:35] waiting up to 6m0s for node "no-preload-483142" to be "Ready" ...
	I1119 02:33:33.756829  307222 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
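The ConfigMap rewrite at 02:33:33.248952 splices a hosts block into the Corefile ahead of the forward directive; the injected stanza can be read back with:

	kubectl --context no-preload-483142 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	# look for:
	#   hosts {
	#      192.168.76.1 host.minikube.internal
	#      fallthrough
	#   }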
	I1119 02:33:29.844643  315363 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.crt.0e221cc2 ...
	I1119 02:33:29.844667  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.crt.0e221cc2: {Name:mk1596cf7137a998e277abf94c4c839907009a9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:29.844872  315363 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.key.0e221cc2 ...
	I1119 02:33:29.844901  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.key.0e221cc2: {Name:mk9d817ab63555ebb02e0590916ce23352cf008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:29.845022  315363 certs.go:382] copying /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.crt.0e221cc2 -> /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.crt
	I1119 02:33:29.845144  315363 certs.go:386] copying /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.key.0e221cc2 -> /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.key
	I1119 02:33:29.845239  315363 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.key
	I1119 02:33:29.845260  315363 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.crt with IP's: []
	I1119 02:33:30.013529  315363 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.crt ...
	I1119 02:33:30.013564  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.crt: {Name:mka683634a30ab1845434f0fc49f75059694b447 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:30.013775  315363 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.key ...
	I1119 02:33:30.013796  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.key: {Name:mk9e8dbde74fbcae5bb0e966570ae4f43c6f07e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
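The apiserver certificate generated above is signed for the IPs listed at 02:33:29.497102 (10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2); a quick way to confirm those SANs landed in the issued cert:

	openssl x509 -noout -text -in /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.crt | grep -A1 'Subject Alternative Name'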
	I1119 02:33:30.014054  315363 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/14657.pem (1338 bytes)
	W1119 02:33:30.014108  315363 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-11107/.minikube/certs/14657_empty.pem, impossibly tiny 0 bytes
	I1119 02:33:30.014124  315363 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 02:33:30.014183  315363 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem (1082 bytes)
	I1119 02:33:30.014219  315363 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:33:30.014257  315363 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/key.pem (1675 bytes)
	I1119 02:33:30.014318  315363 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem (1708 bytes)
	I1119 02:33:30.014986  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:33:30.034798  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 02:33:30.054155  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:33:30.074272  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:33:30.094396  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1119 02:33:30.114605  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 02:33:30.133991  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:33:30.153105  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 02:33:30.172052  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem --> /usr/share/ca-certificates/146572.pem (1708 bytes)
	I1119 02:33:30.194139  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:33:30.212546  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/certs/14657.pem --> /usr/share/ca-certificates/14657.pem (1338 bytes)
	I1119 02:33:30.231534  315363 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 02:33:30.246493  315363 ssh_runner.go:195] Run: openssl version
	I1119 02:33:30.252586  315363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146572.pem && ln -fs /usr/share/ca-certificates/146572.pem /etc/ssl/certs/146572.pem"
	I1119 02:33:30.261620  315363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146572.pem
	I1119 02:33:30.265824  315363 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:02 /usr/share/ca-certificates/146572.pem
	I1119 02:33:30.265886  315363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146572.pem
	I1119 02:33:30.301164  315363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/146572.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 02:33:30.310429  315363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:33:30.319818  315363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:33:30.323998  315363 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:57 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:33:30.324046  315363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:33:30.360567  315363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 02:33:30.370492  315363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14657.pem && ln -fs /usr/share/ca-certificates/14657.pem /etc/ssl/certs/14657.pem"
	I1119 02:33:30.380695  315363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14657.pem
	I1119 02:33:30.385171  315363 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:02 /usr/share/ca-certificates/14657.pem
	I1119 02:33:30.385241  315363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14657.pem
	I1119 02:33:30.422375  315363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14657.pem /etc/ssl/certs/51391683.0"
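The block above is minikube's CA-installation pass: each extra PEM is copied under /usr/share/ca-certificates, hashed with openssl, and symlinked into /etc/ssl/certs under the OpenSSL subject-hash name (<hash>.0), which is how the system trust store looks certificates up. A minimal sketch of the same pattern, with a hypothetical certificate path:

	CERT=/usr/share/ca-certificates/example.pem        # hypothetical path
	HASH=$(openssl x509 -hash -noout -in "$CERT")      # e.g. 3ec20f2e, as in the log
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"     # OpenSSL resolves CAs as <hash>.0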
	I1119 02:33:30.432329  315363 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:33:30.436333  315363 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 02:33:30.436432  315363 kubeadm.go:401] StartCluster: {Name:embed-certs-168452 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-168452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:33:30.436494  315363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1119 02:33:30.436588  315363 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:33:30.465191  315363 cri.go:89] found id: ""
	I1119 02:33:30.465255  315363 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:33:30.474328  315363 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 02:33:30.483132  315363 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 02:33:30.483196  315363 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 02:33:30.491249  315363 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 02:33:30.491272  315363 kubeadm.go:158] found existing configuration files:
	
	I1119 02:33:30.491320  315363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 02:33:30.499072  315363 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 02:33:30.499140  315363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 02:33:30.507018  315363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 02:33:30.514836  315363 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 02:33:30.514890  315363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 02:33:30.523396  315363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 02:33:30.532721  315363 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 02:33:30.532772  315363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 02:33:30.541409  315363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 02:33:30.550090  315363 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 02:33:30.550157  315363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
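Before running kubeadm, minikube sweeps for stale kubeconfigs: each file under /etc/kubernetes is grepped for the expected control-plane endpoint and removed if it does not match. On this first start all four greps exit with status 2 because the files do not exist yet, so the rm calls are no-ops. The loop is equivalent to:

	ENDPOINT=https://control-plane.minikube.internal:8443
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done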
	I1119 02:33:30.558693  315363 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 02:33:30.636057  315363 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1119 02:33:30.702518  315363 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
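Because the docker driver runs the node as a container, several kubeadm preflight checks cannot pass and are skipped; the two WARNING lines (kernel "configs" module, kubelet service not enabled) are the expected residue. The invocation shape, with the ignore list abridged from the full command above:

	sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=SystemVerification,Swap,NumCPU,Mem,Port-10250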
	W1119 02:33:31.715333  301934 node_ready.go:57] node "old-k8s-version-691094" has "Ready":"False" status (will retry)
	W1119 02:33:33.715963  301934 node_ready.go:57] node "old-k8s-version-691094" has "Ready":"False" status (will retry)
	I1119 02:33:34.216972  301934 node_ready.go:49] node "old-k8s-version-691094" is "Ready"
	I1119 02:33:34.217010  301934 node_ready.go:38] duration metric: took 14.505569399s for node "old-k8s-version-691094" to be "Ready" ...
	I1119 02:33:34.217027  301934 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:33:34.217083  301934 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:33:34.235995  301934 api_server.go:72] duration metric: took 14.98160502s to wait for apiserver process to appear ...
	I1119 02:33:34.236024  301934 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:33:34.236046  301934 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1119 02:33:34.242612  301934 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1119 02:33:34.244469  301934 api_server.go:141] control plane version: v1.28.0
	I1119 02:33:34.244501  301934 api_server.go:131] duration metric: took 8.468136ms to wait for apiserver health ...
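Health gating here is a plain HTTPS probe: the apiserver's /healthz must return 200 with body "ok" before the version check proceeds. A manual equivalent (-k skips TLS verification, since the minikube CA may not be in the local trust store):

	curl -sk https://192.168.103.2:8443/healthz    # prints "ok" when healthy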
	I1119 02:33:34.244512  301934 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 02:33:34.249250  301934 system_pods.go:59] 8 kube-system pods found
	I1119 02:33:34.249293  301934 system_pods.go:61] "coredns-5dd5756b68-bbvqz" [56c0e21e-9d86-46c6-bc02-2a75554c0f07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:34.249301  301934 system_pods.go:61] "etcd-old-k8s-version-691094" [76abba5a-69d9-4c2c-aeb1-0225fad93a1d] Running
	I1119 02:33:34.249308  301934 system_pods.go:61] "kindnet-b9cwh" [3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352] Running
	I1119 02:33:34.249326  301934 system_pods.go:61] "kube-apiserver-old-k8s-version-691094" [f47acebe-490a-4f18-bb22-ab3375a572a9] Running
	I1119 02:33:34.249331  301934 system_pods.go:61] "kube-controller-manager-old-k8s-version-691094" [b1768141-5e2e-4b98-9747-6d96a1e4f121] Running
	I1119 02:33:34.249336  301934 system_pods.go:61] "kube-proxy-79df5" [d23dd2d3-6511-45fb-ae70-d1da7b9b6b28] Running
	I1119 02:33:34.249340  301934 system_pods.go:61] "kube-scheduler-old-k8s-version-691094" [ac8b4be1-254b-4014-93ea-35ba777dc762] Running
	I1119 02:33:34.249347  301934 system_pods.go:61] "storage-provisioner" [135636ea-f34f-4bfc-b2f6-cbbf3e91ca30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:34.249389  301934 system_pods.go:74] duration metric: took 4.842718ms to wait for pod list to return data ...
	I1119 02:33:34.249403  301934 default_sa.go:34] waiting for default service account to be created ...
	I1119 02:33:34.251979  301934 default_sa.go:45] found service account: "default"
	I1119 02:33:34.252000  301934 default_sa.go:55] duration metric: took 2.59102ms for default service account to be created ...
	I1119 02:33:34.252008  301934 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 02:33:34.256098  301934 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:34.256141  301934 system_pods.go:89] "coredns-5dd5756b68-bbvqz" [56c0e21e-9d86-46c6-bc02-2a75554c0f07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:34.256148  301934 system_pods.go:89] "etcd-old-k8s-version-691094" [76abba5a-69d9-4c2c-aeb1-0225fad93a1d] Running
	I1119 02:33:34.256155  301934 system_pods.go:89] "kindnet-b9cwh" [3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352] Running
	I1119 02:33:34.256158  301934 system_pods.go:89] "kube-apiserver-old-k8s-version-691094" [f47acebe-490a-4f18-bb22-ab3375a572a9] Running
	I1119 02:33:34.256163  301934 system_pods.go:89] "kube-controller-manager-old-k8s-version-691094" [b1768141-5e2e-4b98-9747-6d96a1e4f121] Running
	I1119 02:33:34.256166  301934 system_pods.go:89] "kube-proxy-79df5" [d23dd2d3-6511-45fb-ae70-d1da7b9b6b28] Running
	I1119 02:33:34.256169  301934 system_pods.go:89] "kube-scheduler-old-k8s-version-691094" [ac8b4be1-254b-4014-93ea-35ba777dc762] Running
	I1119 02:33:34.256173  301934 system_pods.go:89] "storage-provisioner" [135636ea-f34f-4bfc-b2f6-cbbf3e91ca30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:34.256204  301934 retry.go:31] will retry after 294.08163ms: missing components: kube-dns
	I1119 02:33:34.555117  301934 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:34.555149  301934 system_pods.go:89] "coredns-5dd5756b68-bbvqz" [56c0e21e-9d86-46c6-bc02-2a75554c0f07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:34.555155  301934 system_pods.go:89] "etcd-old-k8s-version-691094" [76abba5a-69d9-4c2c-aeb1-0225fad93a1d] Running
	I1119 02:33:34.555160  301934 system_pods.go:89] "kindnet-b9cwh" [3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352] Running
	I1119 02:33:34.555164  301934 system_pods.go:89] "kube-apiserver-old-k8s-version-691094" [f47acebe-490a-4f18-bb22-ab3375a572a9] Running
	I1119 02:33:34.555168  301934 system_pods.go:89] "kube-controller-manager-old-k8s-version-691094" [b1768141-5e2e-4b98-9747-6d96a1e4f121] Running
	I1119 02:33:34.555171  301934 system_pods.go:89] "kube-proxy-79df5" [d23dd2d3-6511-45fb-ae70-d1da7b9b6b28] Running
	I1119 02:33:34.555174  301934 system_pods.go:89] "kube-scheduler-old-k8s-version-691094" [ac8b4be1-254b-4014-93ea-35ba777dc762] Running
	I1119 02:33:34.555181  301934 system_pods.go:89] "storage-provisioner" [135636ea-f34f-4bfc-b2f6-cbbf3e91ca30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:34.555200  301934 retry.go:31] will retry after 239.208285ms: missing components: kube-dns
	I1119 02:33:34.801314  301934 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:34.801356  301934 system_pods.go:89] "coredns-5dd5756b68-bbvqz" [56c0e21e-9d86-46c6-bc02-2a75554c0f07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:34.801397  301934 system_pods.go:89] "etcd-old-k8s-version-691094" [76abba5a-69d9-4c2c-aeb1-0225fad93a1d] Running
	I1119 02:33:34.801408  301934 system_pods.go:89] "kindnet-b9cwh" [3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352] Running
	I1119 02:33:34.801414  301934 system_pods.go:89] "kube-apiserver-old-k8s-version-691094" [f47acebe-490a-4f18-bb22-ab3375a572a9] Running
	I1119 02:33:34.801421  301934 system_pods.go:89] "kube-controller-manager-old-k8s-version-691094" [b1768141-5e2e-4b98-9747-6d96a1e4f121] Running
	I1119 02:33:34.801426  301934 system_pods.go:89] "kube-proxy-79df5" [d23dd2d3-6511-45fb-ae70-d1da7b9b6b28] Running
	I1119 02:33:34.801432  301934 system_pods.go:89] "kube-scheduler-old-k8s-version-691094" [ac8b4be1-254b-4014-93ea-35ba777dc762] Running
	I1119 02:33:34.801446  301934 system_pods.go:89] "storage-provisioner" [135636ea-f34f-4bfc-b2f6-cbbf3e91ca30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:34.801465  301934 retry.go:31] will retry after 406.320974ms: missing components: kube-dns
	I1119 02:33:33.758898  307222 addons.go:515] duration metric: took 627.311179ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 02:33:34.007122  307222 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-483142" context rescaled to 1 replicas
	W1119 02:33:35.507777  307222 node_ready.go:57] node "no-preload-483142" has "Ready":"False" status (will retry)
	I1119 02:33:35.212153  301934 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:35.212193  301934 system_pods.go:89] "coredns-5dd5756b68-bbvqz" [56c0e21e-9d86-46c6-bc02-2a75554c0f07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:35.212202  301934 system_pods.go:89] "etcd-old-k8s-version-691094" [76abba5a-69d9-4c2c-aeb1-0225fad93a1d] Running
	I1119 02:33:35.212208  301934 system_pods.go:89] "kindnet-b9cwh" [3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352] Running
	I1119 02:33:35.212214  301934 system_pods.go:89] "kube-apiserver-old-k8s-version-691094" [f47acebe-490a-4f18-bb22-ab3375a572a9] Running
	I1119 02:33:35.212221  301934 system_pods.go:89] "kube-controller-manager-old-k8s-version-691094" [b1768141-5e2e-4b98-9747-6d96a1e4f121] Running
	I1119 02:33:35.212226  301934 system_pods.go:89] "kube-proxy-79df5" [d23dd2d3-6511-45fb-ae70-d1da7b9b6b28] Running
	I1119 02:33:35.212230  301934 system_pods.go:89] "kube-scheduler-old-k8s-version-691094" [ac8b4be1-254b-4014-93ea-35ba777dc762] Running
	I1119 02:33:35.212235  301934 system_pods.go:89] "storage-provisioner" [135636ea-f34f-4bfc-b2f6-cbbf3e91ca30] Running
	I1119 02:33:35.212252  301934 retry.go:31] will retry after 502.533324ms: missing components: kube-dns
	I1119 02:33:35.719172  301934 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:35.719211  301934 system_pods.go:89] "coredns-5dd5756b68-bbvqz" [56c0e21e-9d86-46c6-bc02-2a75554c0f07] Running
	I1119 02:33:35.719220  301934 system_pods.go:89] "etcd-old-k8s-version-691094" [76abba5a-69d9-4c2c-aeb1-0225fad93a1d] Running
	I1119 02:33:35.719225  301934 system_pods.go:89] "kindnet-b9cwh" [3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352] Running
	I1119 02:33:35.719231  301934 system_pods.go:89] "kube-apiserver-old-k8s-version-691094" [f47acebe-490a-4f18-bb22-ab3375a572a9] Running
	I1119 02:33:35.719238  301934 system_pods.go:89] "kube-controller-manager-old-k8s-version-691094" [b1768141-5e2e-4b98-9747-6d96a1e4f121] Running
	I1119 02:33:35.719243  301934 system_pods.go:89] "kube-proxy-79df5" [d23dd2d3-6511-45fb-ae70-d1da7b9b6b28] Running
	I1119 02:33:35.719248  301934 system_pods.go:89] "kube-scheduler-old-k8s-version-691094" [ac8b4be1-254b-4014-93ea-35ba777dc762] Running
	I1119 02:33:35.719254  301934 system_pods.go:89] "storage-provisioner" [135636ea-f34f-4bfc-b2f6-cbbf3e91ca30] Running
	I1119 02:33:35.719267  301934 system_pods.go:126] duration metric: took 1.46725409s to wait for k8s-apps to be running ...
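The preceding "will retry after ..." lines are minikube's randomized-backoff poll: the kube-system pod list is refetched until no expected component is missing, converging here once coredns (kube-dns) and storage-provisioner leave Pending. A rough shell equivalent of the kube-dns part of that wait:

	until kubectl -n kube-system get pods -l k8s-app=kube-dns \
	      -o jsonpath='{.items[*].status.phase}' | grep -q Running; do
	  sleep 0.3    # the log shows randomized delays of roughly 240-500ms
	done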
	I1119 02:33:35.719280  301934 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 02:33:35.719333  301934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:33:35.733944  301934 system_svc.go:56] duration metric: took 14.654804ms WaitForService to wait for kubelet
	I1119 02:33:35.733974  301934 kubeadm.go:587] duration metric: took 16.479589704s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:33:35.733994  301934 node_conditions.go:102] verifying NodePressure condition ...
	I1119 02:33:35.736881  301934 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 02:33:35.736904  301934 node_conditions.go:123] node cpu capacity is 8
	I1119 02:33:35.736917  301934 node_conditions.go:105] duration metric: took 2.917087ms to run NodePressure ...
	I1119 02:33:35.736947  301934 start.go:242] waiting for startup goroutines ...
	I1119 02:33:35.736956  301934 start.go:247] waiting for cluster config update ...
	I1119 02:33:35.736966  301934 start.go:256] writing updated cluster config ...
	I1119 02:33:35.737252  301934 ssh_runner.go:195] Run: rm -f paused
	I1119 02:33:35.741706  301934 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:33:35.746693  301934 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-bbvqz" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:35.751796  301934 pod_ready.go:94] pod "coredns-5dd5756b68-bbvqz" is "Ready"
	I1119 02:33:35.751821  301934 pod_ready.go:86] duration metric: took 5.102077ms for pod "coredns-5dd5756b68-bbvqz" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:35.754811  301934 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:35.759826  301934 pod_ready.go:94] pod "etcd-old-k8s-version-691094" is "Ready"
	I1119 02:33:35.759852  301934 pod_ready.go:86] duration metric: took 5.017899ms for pod "etcd-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:35.763701  301934 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:35.768670  301934 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-691094" is "Ready"
	I1119 02:33:35.768693  301934 pod_ready.go:86] duration metric: took 4.969901ms for pod "kube-apiserver-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:35.772227  301934 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:36.146684  301934 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-691094" is "Ready"
	I1119 02:33:36.146718  301934 pod_ready.go:86] duration metric: took 374.468133ms for pod "kube-controller-manager-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:36.347472  301934 pod_ready.go:83] waiting for pod "kube-proxy-79df5" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:36.746791  301934 pod_ready.go:94] pod "kube-proxy-79df5" is "Ready"
	I1119 02:33:36.746855  301934 pod_ready.go:86] duration metric: took 399.347819ms for pod "kube-proxy-79df5" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:36.946961  301934 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:37.347059  301934 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-691094" is "Ready"
	I1119 02:33:37.347090  301934 pod_ready.go:86] duration metric: took 400.10454ms for pod "kube-scheduler-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:37.347108  301934 pod_ready.go:40] duration metric: took 1.605370699s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:33:37.406793  301934 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1119 02:33:37.408685  301934 out.go:203] 
	W1119 02:33:37.410052  301934 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1119 02:33:37.411691  301934 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1119 02:33:37.413481  301934 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-691094" cluster and "default" namespace by default
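The warning above is the client/server skew check: kubectl 1.34.2 against a 1.28.0 control plane is six minor versions apart, far outside the supported +/-1 window, so some commands may misbehave. The suggested workaround routes through minikube's bundled, version-matched client:

	minikube kubectl -- get pods -A    # uses a kubectl matching the cluster version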
	W1119 02:33:37.511440  307222 node_ready.go:57] node "no-preload-483142" has "Ready":"False" status (will retry)
	W1119 02:33:40.007282  307222 node_ready.go:57] node "no-preload-483142" has "Ready":"False" status (will retry)
	I1119 02:33:42.519187  315363 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 02:33:42.519270  315363 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 02:33:42.519471  315363 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 02:33:42.519558  315363 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 02:33:42.519641  315363 kubeadm.go:319] OS: Linux
	I1119 02:33:42.519723  315363 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 02:33:42.519793  315363 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 02:33:42.519863  315363 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 02:33:42.519937  315363 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 02:33:42.520011  315363 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 02:33:42.520082  315363 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 02:33:42.520161  315363 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 02:33:42.520246  315363 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 02:33:42.520396  315363 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 02:33:42.520528  315363 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 02:33:42.520640  315363 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 02:33:42.520739  315363 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 02:33:42.522619  315363 out.go:252]   - Generating certificates and keys ...
	I1119 02:33:42.522717  315363 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 02:33:42.522778  315363 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 02:33:42.522841  315363 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 02:33:42.522898  315363 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 02:33:42.522948  315363 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 02:33:42.522986  315363 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 02:33:42.523065  315363 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 02:33:42.523231  315363 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-168452 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1119 02:33:42.523301  315363 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 02:33:42.523451  315363 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-168452 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1119 02:33:42.523527  315363 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 02:33:42.523599  315363 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 02:33:42.523658  315363 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 02:33:42.523737  315363 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 02:33:42.523787  315363 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 02:33:42.523833  315363 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 02:33:42.523879  315363 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 02:33:42.523945  315363 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 02:33:42.524004  315363 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 02:33:42.524082  315363 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 02:33:42.524137  315363 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 02:33:42.525751  315363 out.go:252]   - Booting up control plane ...
	I1119 02:33:42.525831  315363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 02:33:42.525893  315363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 02:33:42.525997  315363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 02:33:42.526121  315363 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 02:33:42.526235  315363 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 02:33:42.526323  315363 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 02:33:42.526401  315363 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 02:33:42.526441  315363 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 02:33:42.526546  315363 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 02:33:42.526633  315363 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 02:33:42.526684  315363 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001668097s
	I1119 02:33:42.526759  315363 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 02:33:42.526828  315363 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1119 02:33:42.526912  315363 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 02:33:42.526979  315363 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 02:33:42.527060  315363 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.143588684s
	I1119 02:33:42.527116  315363 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.751163591s
	I1119 02:33:42.527185  315363 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002351229s
	I1119 02:33:42.527279  315363 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 02:33:42.527418  315363 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 02:33:42.527475  315363 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 02:33:42.527642  315363 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-168452 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 02:33:42.527698  315363 kubeadm.go:319] [bootstrap-token] Using token: f9q4qi.t8dfm2zfbs2z2sgs
	I1119 02:33:42.529100  315363 out.go:252]   - Configuring RBAC rules ...
	I1119 02:33:42.529232  315363 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 02:33:42.529348  315363 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 02:33:42.529576  315363 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 02:33:42.529779  315363 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 02:33:42.529949  315363 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 02:33:42.530070  315363 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 02:33:42.530217  315363 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 02:33:42.530321  315363 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 02:33:42.530403  315363 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 02:33:42.530413  315363 kubeadm.go:319] 
	I1119 02:33:42.530492  315363 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 02:33:42.530502  315363 kubeadm.go:319] 
	I1119 02:33:42.530604  315363 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 02:33:42.530618  315363 kubeadm.go:319] 
	I1119 02:33:42.530647  315363 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 02:33:42.530726  315363 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 02:33:42.530797  315363 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 02:33:42.530809  315363 kubeadm.go:319] 
	I1119 02:33:42.530880  315363 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 02:33:42.530885  315363 kubeadm.go:319] 
	I1119 02:33:42.530954  315363 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 02:33:42.530981  315363 kubeadm.go:319] 
	I1119 02:33:42.531052  315363 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 02:33:42.531164  315363 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 02:33:42.531261  315363 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 02:33:42.531271  315363 kubeadm.go:319] 
	I1119 02:33:42.531424  315363 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 02:33:42.531551  315363 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 02:33:42.531570  315363 kubeadm.go:319] 
	I1119 02:33:42.531690  315363 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token f9q4qi.t8dfm2zfbs2z2sgs \
	I1119 02:33:42.531850  315363 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7672c4bccf136c60db27ba1101c0494bf335a1a16bcd609d07ab95cd78c58a5a \
	I1119 02:33:42.531878  315363 kubeadm.go:319] 	--control-plane 
	I1119 02:33:42.531885  315363 kubeadm.go:319] 
	I1119 02:33:42.531966  315363 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 02:33:42.531972  315363 kubeadm.go:319] 
	I1119 02:33:42.532046  315363 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token f9q4qi.t8dfm2zfbs2z2sgs \
	I1119 02:33:42.532149  315363 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7672c4bccf136c60db27ba1101c0494bf335a1a16bcd609d07ab95cd78c58a5a 
	I1119 02:33:42.532161  315363 cni.go:84] Creating CNI manager for ""
	I1119 02:33:42.532167  315363 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 02:33:42.535194  315363 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 02:33:42.536650  315363 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 02:33:42.541710  315363 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 02:33:42.541734  315363 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 02:33:42.556040  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
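With the docker driver plus the containerd runtime, minikube selects kindnet as the CNI and applies its manifest (scp'd to /var/tmp/minikube/cni.yaml) with the bundled kubectl against the in-VM kubeconfig, exactly as the Run line above shows. A follow-up check one could run (daemonset name inferred from the kindnet-* pod names elsewhere in this log, so treat it as an assumption):

	kubectl -n kube-system get daemonset kindnet    # one kindnet pod per node when healthy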
	I1119 02:33:42.817018  315363 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 02:33:42.817147  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:42.817217  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-168452 minikube.k8s.io/updated_at=2025_11_19T02_33_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=embed-certs-168452 minikube.k8s.io/primary=true
	I1119 02:33:42.828812  315363 ops.go:34] apiserver oom_adj: -16
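The oom_adj probe confirms the kubelet started the apiserver with a strongly negative OOM score (-16), making it one of the last processes the kernel's OOM killer will choose. The same read as the Run line above:

	cat /proc/$(pgrep kube-apiserver)/oom_adj    # -16 here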
	I1119 02:33:42.896633  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:43.396810  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:43.896801  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:44.397677  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1119 02:33:46.450455  208368 system_pods.go:55] pod list returned error: the server was unable to return a response in the time allotted, but may still be processing the request (get pods)
	I1119 02:33:46.452233  208368 out.go:203] 
	W1119 02:33:46.453522  208368 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for system pods: apiserver never returned a pod list
	W1119 02:33:46.453544  208368 out.go:285] * 
	W1119 02:33:46.455831  208368 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 02:33:46.457044  208368 out.go:203] 
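This interleaved failure belongs to a third profile (PID 208368, the TestKubernetesUpgrade run): its 6m0s node wait expired because the apiserver never returned a pod list, so minikube exits with GUEST_START and prints the standard triage box. The suggested first step is:

	minikube logs --file=logs.txt    # collect full logs to attach to a GitHub issue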
	W1119 02:33:42.007484  307222 node_ready.go:57] node "no-preload-483142" has "Ready":"False" status (will retry)
	W1119 02:33:44.007813  307222 node_ready.go:57] node "no-preload-483142" has "Ready":"False" status (will retry)
	W1119 02:33:46.008192  307222 node_ready.go:57] node "no-preload-483142" has "Ready":"False" status (will retry)
	I1119 02:33:44.897377  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:45.397137  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:45.897616  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:46.397448  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:46.896710  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:47.397632  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:47.897150  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:48.003028  315363 kubeadm.go:1114] duration metric: took 5.18596901s to wait for elevateKubeSystemPrivileges
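The burst of `kubectl get sa default` calls above is the elevateKubeSystemPrivileges wait: the minikube-rbac clusterrolebinding (cluster-admin for kube-system:default) was created earlier, and the loop polls until the controller-manager has materialized the default service account. In log order, roughly:

	kubectl create clusterrolebinding minikube-rbac \
	  --clusterrole=cluster-admin --serviceaccount=kube-system:default
	until kubectl -n default get sa default >/dev/null 2>&1; do sleep 0.5; done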
	I1119 02:33:48.003056  315363 kubeadm.go:403] duration metric: took 17.566632128s to StartCluster
	I1119 02:33:48.003071  315363 settings.go:142] acquiring lock: {Name:mka23b77a5b0acf39a3c823ca2c708cd59c6f5ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:48.003125  315363 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11107/kubeconfig
	I1119 02:33:48.005668  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/kubeconfig: {Name:mka1017ae43351e828a5b28d36aa5ae57c3e40e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:48.005964  315363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 02:33:48.005984  315363 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 02:33:48.006098  315363 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:33:48.006191  315363 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-168452"
	I1119 02:33:48.006211  315363 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-168452"
	I1119 02:33:48.006209  315363 addons.go:70] Setting default-storageclass=true in profile "embed-certs-168452"
	I1119 02:33:48.006218  315363 config.go:182] Loaded profile config "embed-certs-168452": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:33:48.006231  315363 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-168452"
	I1119 02:33:48.006249  315363 host.go:66] Checking if "embed-certs-168452" exists ...
	I1119 02:33:48.006692  315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Status}}
	I1119 02:33:48.006819  315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Status}}
	I1119 02:33:48.007901  315363 out.go:179] * Verifying Kubernetes components...
	I1119 02:33:48.009142  315363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:33:48.032568  315363 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:33:48.032594  315363 addons.go:239] Setting addon default-storageclass=true in "embed-certs-168452"
	I1119 02:33:48.032649  315363 host.go:66] Checking if "embed-certs-168452" exists ...
	I1119 02:33:48.033140  315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Status}}
	I1119 02:33:48.034177  315363 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:33:48.034248  315363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:33:48.034332  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:48.063775  315363 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:33:48.063802  315363 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:33:48.063864  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:48.067763  315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
	I1119 02:33:48.088481  315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
	I1119 02:33:48.118977  315363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
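The sed pipeline above splices a hosts block into the CoreDNS Corefile so pods can resolve host.minikube.internal to the host gateway (192.168.94.1 on this network), then replaces the ConfigMap in place. A hypothetical follow-up check of the result:

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	# expected fragment, per the sed expression above:
	#   hosts {
	#      192.168.94.1 host.minikube.internal
	#      fallthrough
	#   }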
	I1119 02:33:48.181811  315363 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:33:48.192106  315363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:33:48.217510  315363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:33:48.350174  315363 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1119 02:33:48.351838  315363 node_ready.go:35] waiting up to 6m0s for node "embed-certs-168452" to be "Ready" ...
	I1119 02:33:48.575859  315363 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 02:33:48.577031  315363 addons.go:515] duration metric: took 570.934719ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 02:33:48.855157  315363 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-168452" context rescaled to 1 replicas
	I1119 02:33:47.507132  307222 node_ready.go:49] node "no-preload-483142" is "Ready"
	I1119 02:33:47.507166  307222 node_ready.go:38] duration metric: took 14.002781703s for node "no-preload-483142" to be "Ready" ...
	I1119 02:33:47.507196  307222 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:33:47.507253  307222 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:33:47.522586  307222 api_server.go:72] duration metric: took 14.39103106s to wait for apiserver process to appear ...
	I1119 02:33:47.522619  307222 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:33:47.522641  307222 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 02:33:47.526803  307222 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1119 02:33:47.527974  307222 api_server.go:141] control plane version: v1.34.1
	I1119 02:33:47.528002  307222 api_server.go:131] duration metric: took 5.376603ms to wait for apiserver health ...
	I1119 02:33:47.528022  307222 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 02:33:47.531978  307222 system_pods.go:59] 8 kube-system pods found
	I1119 02:33:47.532021  307222 system_pods.go:61] "coredns-66bc5c9577-zgfk9" [a3d24a51-2fec-4ae7-852e-c65aef957597] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:47.532030  307222 system_pods.go:61] "etcd-no-preload-483142" [7c2b0e3b-9242-4a7b-adbc-49e846f76e61] Running
	I1119 02:33:47.532039  307222 system_pods.go:61] "kindnet-6nr7d" [b6bf7df0-8af6-4156-990c-6f70cc159a8c] Running
	I1119 02:33:47.532046  307222 system_pods.go:61] "kube-apiserver-no-preload-483142" [1321916d-38ae-4f26-9d9f-0917576d2d6b] Running
	I1119 02:33:47.532053  307222 system_pods.go:61] "kube-controller-manager-no-preload-483142" [62d9ccdb-1224-4ba6-a8b6-4270a0043a26] Running
	I1119 02:33:47.532059  307222 system_pods.go:61] "kube-proxy-xhrdt" [2ed3d00d-7760-4eed-af73-abf314cf5901] Running
	I1119 02:33:47.532066  307222 system_pods.go:61] "kube-scheduler-no-preload-483142" [c58ff0b6-59c3-4677-8770-42bfef0d53a2] Running
	I1119 02:33:47.532078  307222 system_pods.go:61] "storage-provisioner" [c66a6926-3a4a-4aa9-b40b-349e1b056683] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:47.532088  307222 system_pods.go:74] duration metric: took 4.058015ms to wait for pod list to return data ...
	I1119 02:33:47.532104  307222 default_sa.go:34] waiting for default service account to be created ...
	I1119 02:33:47.535565  307222 default_sa.go:45] found service account: "default"
	I1119 02:33:47.535586  307222 default_sa.go:55] duration metric: took 3.475549ms for default service account to be created ...
	I1119 02:33:47.535596  307222 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 02:33:47.539134  307222 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:47.539173  307222 system_pods.go:89] "coredns-66bc5c9577-zgfk9" [a3d24a51-2fec-4ae7-852e-c65aef957597] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:47.539181  307222 system_pods.go:89] "etcd-no-preload-483142" [7c2b0e3b-9242-4a7b-adbc-49e846f76e61] Running
	I1119 02:33:47.539188  307222 system_pods.go:89] "kindnet-6nr7d" [b6bf7df0-8af6-4156-990c-6f70cc159a8c] Running
	I1119 02:33:47.539192  307222 system_pods.go:89] "kube-apiserver-no-preload-483142" [1321916d-38ae-4f26-9d9f-0917576d2d6b] Running
	I1119 02:33:47.539196  307222 system_pods.go:89] "kube-controller-manager-no-preload-483142" [62d9ccdb-1224-4ba6-a8b6-4270a0043a26] Running
	I1119 02:33:47.539204  307222 system_pods.go:89] "kube-proxy-xhrdt" [2ed3d00d-7760-4eed-af73-abf314cf5901] Running
	I1119 02:33:47.539210  307222 system_pods.go:89] "kube-scheduler-no-preload-483142" [c58ff0b6-59c3-4677-8770-42bfef0d53a2] Running
	I1119 02:33:47.539215  307222 system_pods.go:89] "storage-provisioner" [c66a6926-3a4a-4aa9-b40b-349e1b056683] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:47.539249  307222 retry.go:31] will retry after 294.264342ms: missing components: kube-dns
	I1119 02:33:47.840195  307222 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:47.840235  307222 system_pods.go:89] "coredns-66bc5c9577-zgfk9" [a3d24a51-2fec-4ae7-852e-c65aef957597] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:47.840244  307222 system_pods.go:89] "etcd-no-preload-483142" [7c2b0e3b-9242-4a7b-adbc-49e846f76e61] Running
	I1119 02:33:47.840253  307222 system_pods.go:89] "kindnet-6nr7d" [b6bf7df0-8af6-4156-990c-6f70cc159a8c] Running
	I1119 02:33:47.840257  307222 system_pods.go:89] "kube-apiserver-no-preload-483142" [1321916d-38ae-4f26-9d9f-0917576d2d6b] Running
	I1119 02:33:47.840262  307222 system_pods.go:89] "kube-controller-manager-no-preload-483142" [62d9ccdb-1224-4ba6-a8b6-4270a0043a26] Running
	I1119 02:33:47.840267  307222 system_pods.go:89] "kube-proxy-xhrdt" [2ed3d00d-7760-4eed-af73-abf314cf5901] Running
	I1119 02:33:47.840272  307222 system_pods.go:89] "kube-scheduler-no-preload-483142" [c58ff0b6-59c3-4677-8770-42bfef0d53a2] Running
	I1119 02:33:47.840288  307222 system_pods.go:89] "storage-provisioner" [c66a6926-3a4a-4aa9-b40b-349e1b056683] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:47.840308  307222 retry.go:31] will retry after 249.747879ms: missing components: kube-dns
	I1119 02:33:48.097280  307222 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:48.097316  307222 system_pods.go:89] "coredns-66bc5c9577-zgfk9" [a3d24a51-2fec-4ae7-852e-c65aef957597] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:48.097322  307222 system_pods.go:89] "etcd-no-preload-483142" [7c2b0e3b-9242-4a7b-adbc-49e846f76e61] Running
	I1119 02:33:48.097331  307222 system_pods.go:89] "kindnet-6nr7d" [b6bf7df0-8af6-4156-990c-6f70cc159a8c] Running
	I1119 02:33:48.097336  307222 system_pods.go:89] "kube-apiserver-no-preload-483142" [1321916d-38ae-4f26-9d9f-0917576d2d6b] Running
	I1119 02:33:48.097342  307222 system_pods.go:89] "kube-controller-manager-no-preload-483142" [62d9ccdb-1224-4ba6-a8b6-4270a0043a26] Running
	I1119 02:33:48.097346  307222 system_pods.go:89] "kube-proxy-xhrdt" [2ed3d00d-7760-4eed-af73-abf314cf5901] Running
	I1119 02:33:48.097350  307222 system_pods.go:89] "kube-scheduler-no-preload-483142" [c58ff0b6-59c3-4677-8770-42bfef0d53a2] Running
	I1119 02:33:48.097356  307222 system_pods.go:89] "storage-provisioner" [c66a6926-3a4a-4aa9-b40b-349e1b056683] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:48.097389  307222 retry.go:31] will retry after 312.943754ms: missing components: kube-dns
	I1119 02:33:48.416167  307222 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:48.416224  307222 system_pods.go:89] "coredns-66bc5c9577-zgfk9" [a3d24a51-2fec-4ae7-852e-c65aef957597] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:48.416233  307222 system_pods.go:89] "etcd-no-preload-483142" [7c2b0e3b-9242-4a7b-adbc-49e846f76e61] Running
	I1119 02:33:48.416242  307222 system_pods.go:89] "kindnet-6nr7d" [b6bf7df0-8af6-4156-990c-6f70cc159a8c] Running
	I1119 02:33:48.416249  307222 system_pods.go:89] "kube-apiserver-no-preload-483142" [1321916d-38ae-4f26-9d9f-0917576d2d6b] Running
	I1119 02:33:48.416265  307222 system_pods.go:89] "kube-controller-manager-no-preload-483142" [62d9ccdb-1224-4ba6-a8b6-4270a0043a26] Running
	I1119 02:33:48.416285  307222 system_pods.go:89] "kube-proxy-xhrdt" [2ed3d00d-7760-4eed-af73-abf314cf5901] Running
	I1119 02:33:48.416290  307222 system_pods.go:89] "kube-scheduler-no-preload-483142" [c58ff0b6-59c3-4677-8770-42bfef0d53a2] Running
	I1119 02:33:48.416304  307222 system_pods.go:89] "storage-provisioner" [c66a6926-3a4a-4aa9-b40b-349e1b056683] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:48.416338  307222 retry.go:31] will retry after 380.92269ms: missing components: kube-dns
	I1119 02:33:48.802673  307222 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:48.802712  307222 system_pods.go:89] "coredns-66bc5c9577-zgfk9" [a3d24a51-2fec-4ae7-852e-c65aef957597] Running
	I1119 02:33:48.802721  307222 system_pods.go:89] "etcd-no-preload-483142" [7c2b0e3b-9242-4a7b-adbc-49e846f76e61] Running
	I1119 02:33:48.802726  307222 system_pods.go:89] "kindnet-6nr7d" [b6bf7df0-8af6-4156-990c-6f70cc159a8c] Running
	I1119 02:33:48.802731  307222 system_pods.go:89] "kube-apiserver-no-preload-483142" [1321916d-38ae-4f26-9d9f-0917576d2d6b] Running
	I1119 02:33:48.802737  307222 system_pods.go:89] "kube-controller-manager-no-preload-483142" [62d9ccdb-1224-4ba6-a8b6-4270a0043a26] Running
	I1119 02:33:48.802742  307222 system_pods.go:89] "kube-proxy-xhrdt" [2ed3d00d-7760-4eed-af73-abf314cf5901] Running
	I1119 02:33:48.802755  307222 system_pods.go:89] "kube-scheduler-no-preload-483142" [c58ff0b6-59c3-4677-8770-42bfef0d53a2] Running
	I1119 02:33:48.802764  307222 system_pods.go:89] "storage-provisioner" [c66a6926-3a4a-4aa9-b40b-349e1b056683] Running
	I1119 02:33:48.802775  307222 system_pods.go:126] duration metric: took 1.26717246s to wait for k8s-apps to be running ...
	I1119 02:33:48.802788  307222 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 02:33:48.802838  307222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:33:48.819234  307222 system_svc.go:56] duration metric: took 16.435872ms WaitForService to wait for kubelet
	I1119 02:33:48.819260  307222 kubeadm.go:587] duration metric: took 15.68771243s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:33:48.819276  307222 node_conditions.go:102] verifying NodePressure condition ...
	I1119 02:33:48.823861  307222 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 02:33:48.823901  307222 node_conditions.go:123] node cpu capacity is 8
	I1119 02:33:48.823924  307222 node_conditions.go:105] duration metric: took 4.642889ms to run NodePressure ...
	I1119 02:33:48.823938  307222 start.go:242] waiting for startup goroutines ...
	I1119 02:33:48.823947  307222 start.go:247] waiting for cluster config update ...
	I1119 02:33:48.823960  307222 start.go:256] writing updated cluster config ...
	I1119 02:33:48.824308  307222 ssh_runner.go:195] Run: rm -f paused
	I1119 02:33:48.829946  307222 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:33:48.834766  307222 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zgfk9" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:48.839922  307222 pod_ready.go:94] pod "coredns-66bc5c9577-zgfk9" is "Ready"
	I1119 02:33:48.839950  307222 pod_ready.go:86] duration metric: took 5.154322ms for pod "coredns-66bc5c9577-zgfk9" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:48.842702  307222 pod_ready.go:83] waiting for pod "etcd-no-preload-483142" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:48.848818  307222 pod_ready.go:94] pod "etcd-no-preload-483142" is "Ready"
	I1119 02:33:48.848850  307222 pod_ready.go:86] duration metric: took 6.115348ms for pod "etcd-no-preload-483142" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:48.851685  307222 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-483142" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:48.856283  307222 pod_ready.go:94] pod "kube-apiserver-no-preload-483142" is "Ready"
	I1119 02:33:48.856303  307222 pod_ready.go:86] duration metric: took 4.595808ms for pod "kube-apiserver-no-preload-483142" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:48.858418  307222 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-483142" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:49.235039  307222 pod_ready.go:94] pod "kube-controller-manager-no-preload-483142" is "Ready"
	I1119 02:33:49.235070  307222 pod_ready.go:86] duration metric: took 376.631643ms for pod "kube-controller-manager-no-preload-483142" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:49.435524  307222 pod_ready.go:83] waiting for pod "kube-proxy-xhrdt" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:49.834741  307222 pod_ready.go:94] pod "kube-proxy-xhrdt" is "Ready"
	I1119 02:33:49.834767  307222 pod_ready.go:86] duration metric: took 399.219221ms for pod "kube-proxy-xhrdt" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:50.035303  307222 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-483142" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:50.434632  307222 pod_ready.go:94] pod "kube-scheduler-no-preload-483142" is "Ready"
	I1119 02:33:50.434662  307222 pod_ready.go:86] duration metric: took 399.329431ms for pod "kube-scheduler-no-preload-483142" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:50.434673  307222 pod_ready.go:40] duration metric: took 1.604675519s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:33:50.483179  307222 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 02:33:50.485257  307222 out.go:179] * Done! kubectl is now configured to use "no-preload-483142" cluster and "default" namespace by default
	W1119 02:33:50.355270  315363 node_ready.go:57] node "embed-certs-168452" has "Ready":"False" status (will retry)
	W1119 02:33:52.857401  315363 node_ready.go:57] node "embed-certs-168452" has "Ready":"False" status (will retry)
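
The start log above ends with minikube's pod-readiness wait: system_pods.go polls the kube-system namespace, retry.go sleeps with growing intervals between attempts, and pod_ready.go then gives each control-plane pod an extra 4m0s budget to report Ready. A minimal client-go sketch of that poll-with-backoff pattern (not minikube's actual code; the kubeconfig path and timings here are illustrative):

// waitpods.go: poll kube-system until no pod is non-Running, with capped backoff.
// Sketch of the wait pattern in the log above, not minikube's implementation.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; the CI run keeps it under MINIKUBE_HOME.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s budget in the log
	backoff := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		notRunning := 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				notRunning++
			}
		}
		if notRunning == 0 {
			fmt.Printf("%d kube-system pods running\n", len(pods.Items))
			return
		}
		time.Sleep(backoff)
		if backoff < 2*time.Second {
			backoff *= 2 // grow the interval, like retry.go's increasing waits
		}
	}
	log.Fatal("timed out waiting for kube-system pods")
}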
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	be0d0f1147393       56cc512116c8f       6 seconds ago       Running             busybox                   0                   ee84bdf33f72f       busybox                                     default
	0b9c87419f31d       52546a367cc9e       11 seconds ago      Running             coredns                   0                   cc0a38a1bc6e5       coredns-66bc5c9577-zgfk9                    kube-system
	535511cf0eb8e       6e38f40d628db       11 seconds ago      Running             storage-provisioner       0                   fb0f81d1477d7       storage-provisioner                         kube-system
	1cfb54b0c3a9c       409467f978b4a       23 seconds ago      Running             kindnet-cni               0                   320ef97e0948d       kindnet-6nr7d                               kube-system
	5b1ec14d6e4ff       fc25172553d79       26 seconds ago      Running             kube-proxy                0                   006f0b434dcee       kube-proxy-xhrdt                            kube-system
	970216c90257f       5f1f5298c888d       37 seconds ago      Running             etcd                      0                   cfda256d6d358       etcd-no-preload-483142                      kube-system
	0de9dc5d78d1d       c80c8dbafe7dd       37 seconds ago      Running             kube-controller-manager   0                   4ca2b31dc615a       kube-controller-manager-no-preload-483142   kube-system
	73f8e07d52017       7dd6aaa1717ab       37 seconds ago      Running             kube-scheduler            0                   944a754ca109f       kube-scheduler-no-preload-483142            kube-system
	25547ba51e3d7       c3994bc696102       37 seconds ago      Running             kube-apiserver            0                   2cecfd1a4e942       kube-apiserver-no-preload-483142            kube-system
	
	
	==> containerd <==
	Nov 19 02:33:47 no-preload-483142 containerd[661]: time="2025-11-19T02:33:47.844384690Z" level=info msg="Container 0b9c87419f31d22d32fb5aa8dd18a375dbaf6ff804f443acb1a68acc4e869129: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 02:33:47 no-preload-483142 containerd[661]: time="2025-11-19T02:33:47.848915038Z" level=info msg="CreateContainer within sandbox \"fb0f81d1477d72b6ba303a44d7dcfe8f587da3c0a771c6a1c4b008777ff2fe2d\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"535511cf0eb8e278c6a97e248917117a13744f6d36d6b63bef86d79ddfc7c849\""
	Nov 19 02:33:47 no-preload-483142 containerd[661]: time="2025-11-19T02:33:47.849791624Z" level=info msg="StartContainer for \"535511cf0eb8e278c6a97e248917117a13744f6d36d6b63bef86d79ddfc7c849\""
	Nov 19 02:33:47 no-preload-483142 containerd[661]: time="2025-11-19T02:33:47.850804812Z" level=info msg="connecting to shim 535511cf0eb8e278c6a97e248917117a13744f6d36d6b63bef86d79ddfc7c849" address="unix:///run/containerd/s/80bc605af1512d0a1687772a5377c904b203ae314189198a2e7152d00a32fcbf" protocol=ttrpc version=3
	Nov 19 02:33:47 no-preload-483142 containerd[661]: time="2025-11-19T02:33:47.855314195Z" level=info msg="CreateContainer within sandbox \"cc0a38a1bc6e52bfb86f9111b03705d31cb4f105133f7badf8cd0bad94df215a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0b9c87419f31d22d32fb5aa8dd18a375dbaf6ff804f443acb1a68acc4e869129\""
	Nov 19 02:33:47 no-preload-483142 containerd[661]: time="2025-11-19T02:33:47.855977424Z" level=info msg="StartContainer for \"0b9c87419f31d22d32fb5aa8dd18a375dbaf6ff804f443acb1a68acc4e869129\""
	Nov 19 02:33:47 no-preload-483142 containerd[661]: time="2025-11-19T02:33:47.857103150Z" level=info msg="connecting to shim 0b9c87419f31d22d32fb5aa8dd18a375dbaf6ff804f443acb1a68acc4e869129" address="unix:///run/containerd/s/7937bacd3af16e82d3c89d95ab25bc4f9992a2378956bcfd467a31692c145a49" protocol=ttrpc version=3
	Nov 19 02:33:47 no-preload-483142 containerd[661]: time="2025-11-19T02:33:47.921627759Z" level=info msg="StartContainer for \"535511cf0eb8e278c6a97e248917117a13744f6d36d6b63bef86d79ddfc7c849\" returns successfully"
	Nov 19 02:33:47 no-preload-483142 containerd[661]: time="2025-11-19T02:33:47.937579728Z" level=info msg="StartContainer for \"0b9c87419f31d22d32fb5aa8dd18a375dbaf6ff804f443acb1a68acc4e869129\" returns successfully"
	Nov 19 02:33:50 no-preload-483142 containerd[661]: time="2025-11-19T02:33:50.961421999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:90b24763-24ed-4631-9502-e0fab55d3520,Namespace:default,Attempt:0,}"
	Nov 19 02:33:51 no-preload-483142 containerd[661]: time="2025-11-19T02:33:51.009979068Z" level=info msg="connecting to shim ee84bdf33f72f612f9e552eb8c04a1415d82721b5ba15239cdcf8cba76b203d9" address="unix:///run/containerd/s/82ad45771f8515c1c380bc8b249a10a5518622c1ec1b8d7dfd54393183832080" namespace=k8s.io protocol=ttrpc version=3
	Nov 19 02:33:51 no-preload-483142 containerd[661]: time="2025-11-19T02:33:51.084140935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:90b24763-24ed-4631-9502-e0fab55d3520,Namespace:default,Attempt:0,} returns sandbox id \"ee84bdf33f72f612f9e552eb8c04a1415d82721b5ba15239cdcf8cba76b203d9\""
	Nov 19 02:33:51 no-preload-483142 containerd[661]: time="2025-11-19T02:33:51.086242217Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 02:33:53 no-preload-483142 containerd[661]: time="2025-11-19T02:33:53.281283895Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 02:33:53 no-preload-483142 containerd[661]: time="2025-11-19T02:33:53.282507525Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396646"
	Nov 19 02:33:53 no-preload-483142 containerd[661]: time="2025-11-19T02:33:53.284243638Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 02:33:53 no-preload-483142 containerd[661]: time="2025-11-19T02:33:53.286883382Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 02:33:53 no-preload-483142 containerd[661]: time="2025-11-19T02:33:53.287409188Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.201119642s"
	Nov 19 02:33:53 no-preload-483142 containerd[661]: time="2025-11-19T02:33:53.287452431Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 19 02:33:53 no-preload-483142 containerd[661]: time="2025-11-19T02:33:53.292115595Z" level=info msg="CreateContainer within sandbox \"ee84bdf33f72f612f9e552eb8c04a1415d82721b5ba15239cdcf8cba76b203d9\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 19 02:33:53 no-preload-483142 containerd[661]: time="2025-11-19T02:33:53.302018693Z" level=info msg="Container be0d0f11473938dfef6cae268048f1ff3460754238c85a91abf307dfd89833a6: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 02:33:53 no-preload-483142 containerd[661]: time="2025-11-19T02:33:53.308897778Z" level=info msg="CreateContainer within sandbox \"ee84bdf33f72f612f9e552eb8c04a1415d82721b5ba15239cdcf8cba76b203d9\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"be0d0f11473938dfef6cae268048f1ff3460754238c85a91abf307dfd89833a6\""
	Nov 19 02:33:53 no-preload-483142 containerd[661]: time="2025-11-19T02:33:53.309679787Z" level=info msg="StartContainer for \"be0d0f11473938dfef6cae268048f1ff3460754238c85a91abf307dfd89833a6\""
	Nov 19 02:33:53 no-preload-483142 containerd[661]: time="2025-11-19T02:33:53.310775208Z" level=info msg="connecting to shim be0d0f11473938dfef6cae268048f1ff3460754238c85a91abf307dfd89833a6" address="unix:///run/containerd/s/82ad45771f8515c1c380bc8b249a10a5518622c1ec1b8d7dfd54393183832080" protocol=ttrpc version=3
	Nov 19 02:33:53 no-preload-483142 containerd[661]: time="2025-11-19T02:33:53.368521124Z" level=info msg="StartContainer for \"be0d0f11473938dfef6cae268048f1ff3460754238c85a91abf307dfd89833a6\" returns successfully"
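
The containerd section traces the CRI flow for each workload: CreateContainer inside an existing sandbox, then StartContainer over a per-pod ttrpc shim socket. Kubernetes-managed containers live in containerd's k8s.io namespace, which a client must select explicitly. A small sketch that lists them from the node (assumes root access to the containerd socket, and uses the classic v1 Go client import; the paths differ under the v2 module):

// listctrs.go: list CRI-managed containers in containerd's k8s.io namespace.
package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes containers are invisible in the default namespace;
	// the kubelet/CRI plugin files everything under "k8s.io".
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		fmt.Println(c.ID()) // IDs like be0d0f114... from the table above
	}
}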
	
	
	==> coredns [0b9c87419f31d22d32fb5aa8dd18a375dbaf6ff804f443acb1a68acc4e869129] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52829 - 49928 "HINFO IN 7672509729958589229.4000050543870758584. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021950058s
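
The HINFO query logged above is CoreDNS's startup self-test of its own resolution path. With the kube-dns Service bound to ClusterIP 10.96.0.10 (allocated in the kube-apiserver section below), any pod can resolve in-cluster names through it. A hedged Go sketch that pins a resolver to that address (run from inside the cluster; the service IP is taken from this run's logs):

// dnscheck.go: resolve an in-cluster name via the kube-dns ClusterIP.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			// Ignore the resolver's chosen address; pin to the cluster DNS.
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	addrs, err := r.LookupHost(context.TODO(), "kubernetes.default.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("kubernetes service resolves to:", addrs) // expect 10.96.0.1
}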
	
	
	==> describe nodes <==
	Name:               no-preload-483142
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-483142
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=no-preload-483142
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T02_33_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 02:33:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-483142
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 02:33:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 02:33:58 +0000   Wed, 19 Nov 2025 02:33:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 02:33:58 +0000   Wed, 19 Nov 2025 02:33:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 02:33:58 +0000   Wed, 19 Nov 2025 02:33:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 02:33:58 +0000   Wed, 19 Nov 2025 02:33:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-483142
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                91633eb1-f17c-4bd0-a804-d3558c3c2246
	  Boot ID:                    fea1659d-b751-4f87-a281-819adf52de2d
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-zgfk9                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-no-preload-483142                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-6nr7d                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-no-preload-483142             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-no-preload-483142    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-xhrdt                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-no-preload-483142             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 32s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  32s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  32s   kubelet          Node no-preload-483142 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s   kubelet          Node no-preload-483142 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s   kubelet          Node no-preload-483142 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s   node-controller  Node no-preload-483142 event: Registered Node no-preload-483142 in Controller
	  Normal  NodeReady                12s   kubelet          Node no-preload-483142 status is now: NodeReady
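
The node_conditions.go and node_ready.go checks earlier in the log read exactly what this section shows on the Node object: capacity from status.capacity, readiness from the Ready condition. A client-go sketch of the same read (the context and node name match this run but are otherwise illustrative):

// nodeready.go: read the Ready condition the way node_ready.go's check does.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig but force this run's context.
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	overrides := &clientcmd.ConfigOverrides{CurrentContext: "no-preload-483142"}
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig()
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := client.CoreV1().Nodes().Get(context.TODO(), "no-preload-483142", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s (%s: %s)\n", c.Status, c.Reason, c.Message)
		}
	}
}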
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 cb 73 dc 0d 8a 08 06
	[Nov19 02:31] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 74 0c d7 a6 53 08 06
	[  +0.000339] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 cb 73 dc 0d 8a 08 06
	[ +28.680399] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 e9 7c 92 36 13 08 06
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff fa 7c 46 16 38 40 08 06
	[Nov19 02:32] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1a f6 63 65 24 bc 08 06
	[  +4.552839] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 88 80 5a bc 94 08 06
	[ +11.086189] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 76 d1 26 7f 3d 08 06
	[  +0.000377] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa 7c 46 16 38 40 08 06
	[  +9.270754] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a2 49 fd 34 51 3b 08 06
	[  +0.000702] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 88 80 5a bc 94 08 06
	[ +23.593864] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca 86 43 5f 18 4c 08 06
	[  +0.000495] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a f6 63 65 24 bc 08 06
	
	
	==> etcd [970216c90257f7f960253c399883e66d480f482f82f594baee0af9c0f9d16d2b] <==
	{"level":"info","ts":"2025-11-19T02:33:24.881949Z","caller":"traceutil/trace.go:172","msg":"trace[1606530972] transaction","detail":"{read_only:false; response_revision:24; number_of_response:1; }","duration":"268.997058ms","start":"2025-11-19T02:33:24.612811Z","end":"2025-11-19T02:33:24.881808Z","steps":["trace[1606530972] 'process raft request'  (duration: 268.346969ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:33:24.882312Z","caller":"traceutil/trace.go:172","msg":"trace[1846514929] transaction","detail":"{read_only:false; response_revision:30; number_of_response:1; }","duration":"268.639917ms","start":"2025-11-19T02:33:24.613581Z","end":"2025-11-19T02:33:24.882221Z","steps":["trace[1846514929] 'process raft request'  (duration: 267.932284ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:33:24.882343Z","caller":"traceutil/trace.go:172","msg":"trace[298036101] transaction","detail":"{read_only:false; response_revision:28; number_of_response:1; }","duration":"269.347669ms","start":"2025-11-19T02:33:24.612977Z","end":"2025-11-19T02:33:24.882325Z","steps":["trace[298036101] 'process raft request'  (duration: 268.464896ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:33:24.882588Z","caller":"traceutil/trace.go:172","msg":"trace[638825018] transaction","detail":"{read_only:false; response_revision:27; number_of_response:1; }","duration":"269.431675ms","start":"2025-11-19T02:33:24.612961Z","end":"2025-11-19T02:33:24.882393Z","steps":["trace[638825018] 'process raft request'  (duration: 268.442306ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:33:24.881604Z","caller":"traceutil/trace.go:172","msg":"trace[575219125] transaction","detail":"{read_only:false; response_revision:25; number_of_response:1; }","duration":"268.723372ms","start":"2025-11-19T02:33:24.612867Z","end":"2025-11-19T02:33:24.881590Z","steps":["trace[575219125] 'process raft request'  (duration: 268.339326ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:33:24.882486Z","caller":"traceutil/trace.go:172","msg":"trace[1999947741] transaction","detail":"{read_only:false; response_revision:29; number_of_response:1; }","duration":"269.452216ms","start":"2025-11-19T02:33:24.613004Z","end":"2025-11-19T02:33:24.882456Z","steps":["trace[1999947741] 'process raft request'  (duration: 268.480863ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T02:33:25.129994Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"136.707213ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-19T02:33:25.130060Z","caller":"traceutil/trace.go:172","msg":"trace[415743726] range","detail":"{range_begin:/registry/clusterroles; range_end:; response_count:0; response_revision:38; }","duration":"136.776355ms","start":"2025-11-19T02:33:24.993265Z","end":"2025-11-19T02:33:25.130041Z","steps":["trace[415743726] 'agreement among raft nodes before linearized reading'  (duration: 58.062813ms)","trace[415743726] 'range keys from in-memory index tree'  (duration: 78.610329ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T02:33:25.130132Z","caller":"traceutil/trace.go:172","msg":"trace[1491601123] transaction","detail":"{read_only:false; response_revision:41; number_of_response:1; }","duration":"204.551424ms","start":"2025-11-19T02:33:24.925566Z","end":"2025-11-19T02:33:25.130118Z","steps":["trace[1491601123] 'process raft request'  (duration: 204.517873ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:33:25.130174Z","caller":"traceutil/trace.go:172","msg":"trace[174907215] transaction","detail":"{read_only:false; response_revision:39; number_of_response:1; }","duration":"227.479242ms","start":"2025-11-19T02:33:24.902675Z","end":"2025-11-19T02:33:25.130155Z","steps":["trace[174907215] 'process raft request'  (duration: 148.721051ms)","trace[174907215] 'compare'  (duration: 78.473506ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T02:33:25.130212Z","caller":"traceutil/trace.go:172","msg":"trace[1794256140] transaction","detail":"{read_only:false; response_revision:40; number_of_response:1; }","duration":"204.979965ms","start":"2025-11-19T02:33:24.925218Z","end":"2025-11-19T02:33:25.130198Z","steps":["trace[1794256140] 'process raft request'  (duration: 204.830067ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T02:33:25.129982Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"136.68032ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/system-node-critical\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-19T02:33:25.130286Z","caller":"traceutil/trace.go:172","msg":"trace[1015112124] range","detail":"{range_begin:/registry/priorityclasses/system-node-critical; range_end:; response_count:0; response_revision:38; }","duration":"137.007799ms","start":"2025-11-19T02:33:24.993265Z","end":"2025-11-19T02:33:25.130272Z","steps":["trace[1015112124] 'agreement among raft nodes before linearized reading'  (duration: 58.06699ms)","trace[1015112124] 'range keys from in-memory index tree'  (duration: 78.571179ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T02:33:25.256425Z","caller":"traceutil/trace.go:172","msg":"trace[205379758] linearizableReadLoop","detail":"{readStateIndex:45; appliedIndex:45; }","duration":"123.419246ms","start":"2025-11-19T02:33:25.132983Z","end":"2025-11-19T02:33:25.256402Z","steps":["trace[205379758] 'read index received'  (duration: 123.411377ms)","trace[205379758] 'applied index is now lower than readState.Index'  (duration: 6.576µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T02:33:25.412798Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"279.785142ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-19T02:33:25.412876Z","caller":"traceutil/trace.go:172","msg":"trace[1977849679] range","detail":"{range_begin:/registry/clusterrolebindings; range_end:; response_count:0; response_revision:41; }","duration":"279.877345ms","start":"2025-11-19T02:33:25.132979Z","end":"2025-11-19T02:33:25.412857Z","steps":["trace[1977849679] 'agreement among raft nodes before linearized reading'  (duration: 123.49662ms)","trace[1977849679] 'range keys from in-memory index tree'  (duration: 156.242618ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T02:33:25.412876Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"156.309139ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356742132085433 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/priorityclasses/system-node-critical\" mod_revision:0 > success:<request_put:<key:\"/registry/priorityclasses/system-node-critical\" value_size:375 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-11-19T02:33:25.413000Z","caller":"traceutil/trace.go:172","msg":"trace[1474665022] transaction","detail":"{read_only:false; response_revision:43; number_of_response:1; }","duration":"279.992796ms","start":"2025-11-19T02:33:25.132997Z","end":"2025-11-19T02:33:25.412990Z","steps":["trace[1474665022] 'process raft request'  (duration: 279.94436ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:33:25.413035Z","caller":"traceutil/trace.go:172","msg":"trace[812418682] transaction","detail":"{read_only:false; response_revision:42; number_of_response:1; }","duration":"280.074687ms","start":"2025-11-19T02:33:25.132941Z","end":"2025-11-19T02:33:25.413016Z","steps":["trace[812418682] 'process raft request'  (duration: 123.578199ms)","trace[812418682] 'compare'  (duration: 156.203517ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T02:33:25.534017Z","caller":"traceutil/trace.go:172","msg":"trace[391540234] transaction","detail":"{read_only:false; response_revision:45; number_of_response:1; }","duration":"116.631277ms","start":"2025-11-19T02:33:25.417358Z","end":"2025-11-19T02:33:25.533990Z","steps":["trace[391540234] 'process raft request'  (duration: 108.358032ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:33:25.672004Z","caller":"traceutil/trace.go:172","msg":"trace[1817067666] linearizableReadLoop","detail":"{readStateIndex:53; appliedIndex:53; }","duration":"128.629837ms","start":"2025-11-19T02:33:25.543346Z","end":"2025-11-19T02:33:25.671976Z","steps":["trace[1817067666] 'read index received'  (duration: 128.620673ms)","trace[1817067666] 'applied index is now lower than readState.Index'  (duration: 7.196µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T02:33:25.719384Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"175.987356ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:discovery\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-19T02:33:25.719442Z","caller":"traceutil/trace.go:172","msg":"trace[2070525438] range","detail":"{range_begin:/registry/clusterrolebindings/system:discovery; range_end:; response_count:0; response_revision:49; }","duration":"176.085993ms","start":"2025-11-19T02:33:25.543343Z","end":"2025-11-19T02:33:25.719429Z","steps":["trace[2070525438] 'agreement among raft nodes before linearized reading'  (duration: 128.713297ms)","trace[2070525438] 'range keys from in-memory index tree'  (duration: 47.24005ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T02:33:25.719542Z","caller":"traceutil/trace.go:172","msg":"trace[398624538] transaction","detail":"{read_only:false; response_revision:51; number_of_response:1; }","duration":"176.068531ms","start":"2025-11-19T02:33:25.543461Z","end":"2025-11-19T02:33:25.719529Z","steps":["trace[398624538] 'process raft request'  (duration: 176.016482ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:33:25.719591Z","caller":"traceutil/trace.go:172","msg":"trace[1597272678] transaction","detail":"{read_only:false; response_revision:50; number_of_response:1; }","duration":"176.721113ms","start":"2025-11-19T02:33:25.542855Z","end":"2025-11-19T02:33:25.719576Z","steps":["trace[1597272678] 'process raft request'  (duration: 129.212648ms)","trace[1597272678] 'compare'  (duration: 47.272472ms)"],"step_count":2}
	
	
	==> kernel <==
	 02:33:59 up  1:16,  0 user,  load average: 5.23, 3.87, 2.57
	Linux no-preload-483142 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1cfb54b0c3a9c9af136708d47f32e740d3be7a3c880089823957ef677c8fe86f] <==
	I1119 02:33:36.966187       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 02:33:36.966484       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1119 02:33:36.966659       1 main.go:148] setting mtu 1500 for CNI 
	I1119 02:33:36.966677       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 02:33:36.966696       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T02:33:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 02:33:37.238181       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 02:33:37.238240       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 02:33:37.238257       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 02:33:37.238471       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 02:33:37.566878       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 02:33:37.567659       1 metrics.go:72] Registering metrics
	I1119 02:33:37.567762       1 controller.go:711] "Syncing nftables rules"
	I1119 02:33:47.239514       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 02:33:47.239597       1 main.go:301] handling current node
	I1119 02:33:57.241505       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 02:33:57.241579       1 main.go:301] handling current node
	
	
	==> kube-apiserver [25547ba51e3d7c9f5bc5ed922ef41fd7a5df8f804993f19ee0905141242cb4cf] <==
	I1119 02:33:24.195277       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 02:33:24.323747       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:33:24.323979       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	E1119 02:33:24.325495       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1119 02:33:24.532144       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 02:33:24.610555       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:33:24.611021       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 02:33:25.414182       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 02:33:25.535166       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 02:33:25.535187       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 02:33:26.378666       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 02:33:26.422647       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 02:33:26.503195       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 02:33:26.512716       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1119 02:33:26.514607       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 02:33:26.521465       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 02:33:27.188520       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 02:33:27.551547       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 02:33:27.562002       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 02:33:27.571819       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 02:33:33.033678       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1119 02:33:33.138799       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:33:33.144542       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:33:33.287883       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1119 02:33:58.762772       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:53672: use of closed network connection
	
	
	==> kube-controller-manager [0de9dc5d78d1d9e5fe1c4cae9915420a9f24698374794e8d118dbb18a86cb552] <==
	I1119 02:33:32.182705       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 02:33:32.188146       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 02:33:32.195427       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 02:33:32.204790       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1119 02:33:32.214008       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 02:33:32.223190       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1119 02:33:32.227659       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 02:33:32.229621       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1119 02:33:32.229739       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1119 02:33:32.231124       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 02:33:32.231161       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1119 02:33:32.231172       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 02:33:32.231240       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 02:33:32.231263       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1119 02:33:32.231342       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 02:33:32.231435       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-483142"
	I1119 02:33:32.231488       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1119 02:33:32.231582       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 02:33:32.231665       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1119 02:33:32.231742       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1119 02:33:32.237394       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 02:33:32.251888       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 02:33:32.251932       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 02:33:32.251943       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 02:33:52.255959       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [5b1ec14d6e4ffc1edfbf9bb231d10fa97672c82ed93b0b16806ac5696dbc5fe3] <==
	I1119 02:33:33.746401       1 server_linux.go:53] "Using iptables proxy"
	I1119 02:33:33.820507       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 02:33:33.920825       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 02:33:33.920865       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1119 02:33:33.920995       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 02:33:33.943531       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 02:33:33.943605       1 server_linux.go:132] "Using iptables Proxier"
	I1119 02:33:33.949092       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 02:33:33.949644       1 server.go:527] "Version info" version="v1.34.1"
	I1119 02:33:33.949679       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:33:33.951227       1 config.go:200] "Starting service config controller"
	I1119 02:33:33.951260       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 02:33:33.951318       1 config.go:106] "Starting endpoint slice config controller"
	I1119 02:33:33.951339       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 02:33:33.951433       1 config.go:309] "Starting node config controller"
	I1119 02:33:33.951441       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 02:33:33.951488       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 02:33:33.951496       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 02:33:34.051499       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 02:33:34.051519       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 02:33:34.051563       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 02:33:34.052891       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [73f8e07d520179cb7921a1b4c5c25d67a1e7829441086a765ef18720b414840f] <==
	E1119 02:33:24.054677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 02:33:24.053896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 02:33:24.054651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 02:33:24.054387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 02:33:24.054004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 02:33:24.921306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 02:33:24.956917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 02:33:24.987752       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 02:33:25.004199       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 02:33:25.035732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 02:33:25.049337       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 02:33:25.100939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 02:33:25.148640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 02:33:25.254898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1119 02:33:25.266492       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 02:33:25.406294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 02:33:25.422551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 02:33:25.444105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 02:33:25.531131       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 02:33:25.533269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 02:33:25.539332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 02:33:25.592103       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 02:33:25.601756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 02:33:25.601771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1119 02:33:28.350134       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 02:33:28 no-preload-483142 kubelet[2180]: I1119 02:33:28.477548    2180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-483142" podStartSLOduration=2.477521533 podStartE2EDuration="2.477521533s" podCreationTimestamp="2025-11-19 02:33:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:33:28.464881014 +0000 UTC m=+1.135253586" watchObservedRunningTime="2025-11-19 02:33:28.477521533 +0000 UTC m=+1.147894086"
	Nov 19 02:33:28 no-preload-483142 kubelet[2180]: I1119 02:33:28.492092    2180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-483142" podStartSLOduration=1.492064477 podStartE2EDuration="1.492064477s" podCreationTimestamp="2025-11-19 02:33:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:33:28.478516323 +0000 UTC m=+1.148888895" watchObservedRunningTime="2025-11-19 02:33:28.492064477 +0000 UTC m=+1.162437063"
	Nov 19 02:33:28 no-preload-483142 kubelet[2180]: I1119 02:33:28.502589    2180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-483142" podStartSLOduration=2.502568025 podStartE2EDuration="2.502568025s" podCreationTimestamp="2025-11-19 02:33:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:33:28.492333574 +0000 UTC m=+1.162706144" watchObservedRunningTime="2025-11-19 02:33:28.502568025 +0000 UTC m=+1.172940597"
	Nov 19 02:33:28 no-preload-483142 kubelet[2180]: I1119 02:33:28.515921    2180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-483142" podStartSLOduration=1.515895452 podStartE2EDuration="1.515895452s" podCreationTimestamp="2025-11-19 02:33:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:33:28.502753844 +0000 UTC m=+1.173126419" watchObservedRunningTime="2025-11-19 02:33:28.515895452 +0000 UTC m=+1.186268096"
	Nov 19 02:33:32 no-preload-483142 kubelet[2180]: I1119 02:33:32.178105    2180 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 19 02:33:32 no-preload-483142 kubelet[2180]: I1119 02:33:32.178925    2180 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 19 02:33:33 no-preload-483142 kubelet[2180]: I1119 02:33:33.142843    2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ed3d00d-7760-4eed-af73-abf314cf5901-lib-modules\") pod \"kube-proxy-xhrdt\" (UID: \"2ed3d00d-7760-4eed-af73-abf314cf5901\") " pod="kube-system/kube-proxy-xhrdt"
	Nov 19 02:33:33 no-preload-483142 kubelet[2180]: I1119 02:33:33.142893    2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6bf7df0-8af6-4156-990c-6f70cc159a8c-xtables-lock\") pod \"kindnet-6nr7d\" (UID: \"b6bf7df0-8af6-4156-990c-6f70cc159a8c\") " pod="kube-system/kindnet-6nr7d"
	Nov 19 02:33:33 no-preload-483142 kubelet[2180]: I1119 02:33:33.142927    2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2ed3d00d-7760-4eed-af73-abf314cf5901-kube-proxy\") pod \"kube-proxy-xhrdt\" (UID: \"2ed3d00d-7760-4eed-af73-abf314cf5901\") " pod="kube-system/kube-proxy-xhrdt"
	Nov 19 02:33:33 no-preload-483142 kubelet[2180]: I1119 02:33:33.142950    2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2ed3d00d-7760-4eed-af73-abf314cf5901-xtables-lock\") pod \"kube-proxy-xhrdt\" (UID: \"2ed3d00d-7760-4eed-af73-abf314cf5901\") " pod="kube-system/kube-proxy-xhrdt"
	Nov 19 02:33:33 no-preload-483142 kubelet[2180]: I1119 02:33:33.142980    2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5wls\" (UniqueName: \"kubernetes.io/projected/2ed3d00d-7760-4eed-af73-abf314cf5901-kube-api-access-c5wls\") pod \"kube-proxy-xhrdt\" (UID: \"2ed3d00d-7760-4eed-af73-abf314cf5901\") " pod="kube-system/kube-proxy-xhrdt"
	Nov 19 02:33:33 no-preload-483142 kubelet[2180]: I1119 02:33:33.143004    2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b6bf7df0-8af6-4156-990c-6f70cc159a8c-cni-cfg\") pod \"kindnet-6nr7d\" (UID: \"b6bf7df0-8af6-4156-990c-6f70cc159a8c\") " pod="kube-system/kindnet-6nr7d"
	Nov 19 02:33:33 no-preload-483142 kubelet[2180]: I1119 02:33:33.143030    2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lvff\" (UniqueName: \"kubernetes.io/projected/b6bf7df0-8af6-4156-990c-6f70cc159a8c-kube-api-access-9lvff\") pod \"kindnet-6nr7d\" (UID: \"b6bf7df0-8af6-4156-990c-6f70cc159a8c\") " pod="kube-system/kindnet-6nr7d"
	Nov 19 02:33:33 no-preload-483142 kubelet[2180]: I1119 02:33:33.143053    2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6bf7df0-8af6-4156-990c-6f70cc159a8c-lib-modules\") pod \"kindnet-6nr7d\" (UID: \"b6bf7df0-8af6-4156-990c-6f70cc159a8c\") " pod="kube-system/kindnet-6nr7d"
	Nov 19 02:33:34 no-preload-483142 kubelet[2180]: I1119 02:33:34.476734    2180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xhrdt" podStartSLOduration=1.476710844 podStartE2EDuration="1.476710844s" podCreationTimestamp="2025-11-19 02:33:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:33:34.476575537 +0000 UTC m=+7.146948109" watchObservedRunningTime="2025-11-19 02:33:34.476710844 +0000 UTC m=+7.147083417"
	Nov 19 02:33:37 no-preload-483142 kubelet[2180]: I1119 02:33:37.530258    2180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-6nr7d" podStartSLOduration=1.7262758969999998 podStartE2EDuration="4.530234192s" podCreationTimestamp="2025-11-19 02:33:33 +0000 UTC" firstStartedPulling="2025-11-19 02:33:33.85118927 +0000 UTC m=+6.521561831" lastFinishedPulling="2025-11-19 02:33:36.655147571 +0000 UTC m=+9.325520126" observedRunningTime="2025-11-19 02:33:37.512050093 +0000 UTC m=+10.182422667" watchObservedRunningTime="2025-11-19 02:33:37.530234192 +0000 UTC m=+10.200606764"
	Nov 19 02:33:47 no-preload-483142 kubelet[2180]: I1119 02:33:47.338948    2180 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 19 02:33:47 no-preload-483142 kubelet[2180]: I1119 02:33:47.444723    2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3d24a51-2fec-4ae7-852e-c65aef957597-config-volume\") pod \"coredns-66bc5c9577-zgfk9\" (UID: \"a3d24a51-2fec-4ae7-852e-c65aef957597\") " pod="kube-system/coredns-66bc5c9577-zgfk9"
	Nov 19 02:33:47 no-preload-483142 kubelet[2180]: I1119 02:33:47.444780    2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcf2t\" (UniqueName: \"kubernetes.io/projected/a3d24a51-2fec-4ae7-852e-c65aef957597-kube-api-access-gcf2t\") pod \"coredns-66bc5c9577-zgfk9\" (UID: \"a3d24a51-2fec-4ae7-852e-c65aef957597\") " pod="kube-system/coredns-66bc5c9577-zgfk9"
	Nov 19 02:33:47 no-preload-483142 kubelet[2180]: I1119 02:33:47.444815    2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c66a6926-3a4a-4aa9-b40b-349e1b056683-tmp\") pod \"storage-provisioner\" (UID: \"c66a6926-3a4a-4aa9-b40b-349e1b056683\") " pod="kube-system/storage-provisioner"
	Nov 19 02:33:47 no-preload-483142 kubelet[2180]: I1119 02:33:47.444844    2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb749\" (UniqueName: \"kubernetes.io/projected/c66a6926-3a4a-4aa9-b40b-349e1b056683-kube-api-access-mb749\") pod \"storage-provisioner\" (UID: \"c66a6926-3a4a-4aa9-b40b-349e1b056683\") " pod="kube-system/storage-provisioner"
	Nov 19 02:33:48 no-preload-483142 kubelet[2180]: I1119 02:33:48.526946    2180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zgfk9" podStartSLOduration=15.526918148 podStartE2EDuration="15.526918148s" podCreationTimestamp="2025-11-19 02:33:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:33:48.51253374 +0000 UTC m=+21.182906312" watchObservedRunningTime="2025-11-19 02:33:48.526918148 +0000 UTC m=+21.197290723"
	Nov 19 02:33:50 no-preload-483142 kubelet[2180]: I1119 02:33:50.646540    2180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=17.646511446 podStartE2EDuration="17.646511446s" podCreationTimestamp="2025-11-19 02:33:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:33:48.540054968 +0000 UTC m=+21.210427551" watchObservedRunningTime="2025-11-19 02:33:50.646511446 +0000 UTC m=+23.316884019"
	Nov 19 02:33:50 no-preload-483142 kubelet[2180]: I1119 02:33:50.766532    2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgrxl\" (UniqueName: \"kubernetes.io/projected/90b24763-24ed-4631-9502-e0fab55d3520-kube-api-access-hgrxl\") pod \"busybox\" (UID: \"90b24763-24ed-4631-9502-e0fab55d3520\") " pod="default/busybox"
	Nov 19 02:33:53 no-preload-483142 kubelet[2180]: I1119 02:33:53.526359    2180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.323886287 podStartE2EDuration="3.526338888s" podCreationTimestamp="2025-11-19 02:33:50 +0000 UTC" firstStartedPulling="2025-11-19 02:33:51.085841079 +0000 UTC m=+23.756213644" lastFinishedPulling="2025-11-19 02:33:53.288293691 +0000 UTC m=+25.958666245" observedRunningTime="2025-11-19 02:33:53.526278149 +0000 UTC m=+26.196650722" watchObservedRunningTime="2025-11-19 02:33:53.526338888 +0000 UTC m=+26.196711460"
	
	
	==> storage-provisioner [535511cf0eb8e278c6a97e248917117a13744f6d36d6b63bef86d79ddfc7c849] <==
	I1119 02:33:47.928423       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 02:33:47.942405       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 02:33:47.942456       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 02:33:47.946668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:33:47.954182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 02:33:47.954499       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 02:33:47.954693       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-483142_ef654051-7e72-427c-b5e5-25db73824692!
	I1119 02:33:47.954591       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e5d4fb73-95d8-4c7f-b8d8-87d764024a0e", APIVersion:"v1", ResourceVersion:"412", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-483142_ef654051-7e72-427c-b5e5-25db73824692 became leader
	W1119 02:33:47.961037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:33:47.966350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 02:33:48.055038       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-483142_ef654051-7e72-427c-b5e5-25db73824692!
	W1119 02:33:49.969729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:33:49.974968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:33:51.978327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:33:51.982416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:33:53.985468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:33:53.990011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:33:55.993051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:33:55.998141       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:33:58.002201       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:33:58.006778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:34:00.010442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:34:00.016713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
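
The storage-provisioner log above is dominated by repeated "v1 Endpoints is deprecated in v1.33+" warnings because the provisioner still takes its leader-election lock on a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath, visible in the LeaderElection event). A minimal client-go sketch of the Lease-based lock that avoids the deprecated Endpoints traffic; the lease name and identity below are illustrative, not the provisioner's actual configuration:

	package main

	import (
		"context"
		"log"
		"os"
		"time"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Take the lock on a coordination.k8s.io/v1 Lease instead of a v1
		// Endpoints object, so no deprecated Endpoints reads/writes occur.
		lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
			"kube-system", "k8s.io-minikube-hostpath", // illustrative names
			client.CoreV1(), client.CoordinationV1(),
			resourcelock.ResourceLockConfig{Identity: "example-identity"})
		if err != nil {
			log.Fatal(err)
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { log.Println("acquired lease") },
				OnStoppedLeading: func() { log.Println("lost lease") },
			},
		})
	}
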
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-483142 -n no-preload-483142
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-483142 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
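
The field-selector probe above (status.phase!=Running across all namespaces) has a direct client-go equivalent; a small sketch, assuming a kubeconfig at the default path rather than the harness's context. An empty result is what the post-mortem hopes to see:

	package main

	import (
		"context"
		"fmt"
		"log"
		"path/filepath"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)

	func main() {
		kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Same server-side filter the kubectl invocation uses: every pod,
		// in every namespace, whose phase is anything but Running.
		pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}
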
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-483142
helpers_test.go:243: (dbg) docker inspect no-preload-483142:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "aac37d788f49958ff52dac6090e9aa2cd11fb1e54edad896420bf3f5e737a0af",
	        "Created": "2025-11-19T02:32:57.689763981Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 307739,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T02:32:57.726804627Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/aac37d788f49958ff52dac6090e9aa2cd11fb1e54edad896420bf3f5e737a0af/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/aac37d788f49958ff52dac6090e9aa2cd11fb1e54edad896420bf3f5e737a0af/hostname",
	        "HostsPath": "/var/lib/docker/containers/aac37d788f49958ff52dac6090e9aa2cd11fb1e54edad896420bf3f5e737a0af/hosts",
	        "LogPath": "/var/lib/docker/containers/aac37d788f49958ff52dac6090e9aa2cd11fb1e54edad896420bf3f5e737a0af/aac37d788f49958ff52dac6090e9aa2cd11fb1e54edad896420bf3f5e737a0af-json.log",
	        "Name": "/no-preload-483142",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-483142:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-483142",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "aac37d788f49958ff52dac6090e9aa2cd11fb1e54edad896420bf3f5e737a0af",
	                "LowerDir": "/var/lib/docker/overlay2/bb47bc624d7fa5f37525774a3ae9ce17221988bf05df4a93a6cf6eb317eb354d-init/diff:/var/lib/docker/overlay2/de7938e6a920c133c8c6b988444cfbf6706fdc6982445229ca70e2488a725edb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bb47bc624d7fa5f37525774a3ae9ce17221988bf05df4a93a6cf6eb317eb354d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bb47bc624d7fa5f37525774a3ae9ce17221988bf05df4a93a6cf6eb317eb354d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bb47bc624d7fa5f37525774a3ae9ce17221988bf05df4a93a6cf6eb317eb354d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-483142",
	                "Source": "/var/lib/docker/volumes/no-preload-483142/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-483142",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-483142",
	                "name.minikube.sigs.k8s.io": "no-preload-483142",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c4ed69048ab3ecdf23b4ad8f556ed685f28d61dfda86ca9e242cdd1d08140c5a",
	            "SandboxKey": "/var/run/docker/netns/c4ed69048ab3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-483142": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c1155ea75a9420d95538eab4308c63c41e9b9b6daf36899badbd1b70df2e1f7a",
	                    "EndpointID": "58d34fb6835283be599234219418cf59aeb02160d91eb2865b3d13090e612999",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "8a:e5:53:d9:83:73",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-483142",
	                        "aac37d788f49"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
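
Later in this log minikube reads the container's published SSH port back out of the same inspect document with a docker container inspect -f template over NetworkSettings.Ports. A sketch of the equivalent lookup with the Docker Go SDK (the container name comes from the log; with the port map above it would print 127.0.0.1 33100):

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/docker/docker/client"
		"github.com/docker/go-connections/nat"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			log.Fatal(err)
		}
		defer cli.Close()

		// Equivalent of the template
		//   {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}
		// that minikube's cli_runner executes later in this log.
		info, err := cli.ContainerInspect(context.Background(), "no-preload-483142")
		if err != nil {
			log.Fatal(err)
		}
		bindings := info.NetworkSettings.Ports[nat.Port("22/tcp")]
		if len(bindings) == 0 {
			log.Fatal("22/tcp is not published")
		}
		fmt.Println(bindings[0].HostIP, bindings[0].HostPort)
	}
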
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-483142 -n no-preload-483142
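
The --format={{.Host}} flag here (like --format={{.APIServer}} earlier) is an ordinary Go text/template rendered against minikube's status struct; a stdlib-only sketch of the mechanism, with a hypothetical Status type standing in for minikube's:

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a stand-in for minikube's status struct; only fields
	// exercised by this report's --format flags are sketched here.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		st := Status{Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped"}
		// "{{.Host}}" is the exact template the test passes via --format,
		// so the command prints just the host state, e.g. "Stopped".
		tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			panic(err)
		}
	}
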
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-483142 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-483142 logs -n 25: (1.126323196s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                             ARGS                                                                             │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-212776 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                        │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cat /etc/kubernetes/kubelet.conf                                                                                                       │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cat /var/lib/kubelet/config.yaml                                                                                                       │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo systemctl status docker --all --full --no-pager                                                                                        │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo systemctl cat docker --no-pager                                                                                                        │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cat /etc/docker/daemon.json                                                                                                            │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo docker system info                                                                                                                     │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo systemctl status cri-docker --all --full --no-pager                                                                                    │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo systemctl cat cri-docker --no-pager                                                                                                    │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                               │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                         │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cri-dockerd --version                                                                                                                  │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo systemctl status containerd --all --full --no-pager                                                                                    │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo systemctl cat containerd --no-pager                                                                                                    │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cat /lib/systemd/system/containerd.service                                                                                             │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cat /etc/containerd/config.toml                                                                                                        │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo containerd config dump                                                                                                                 │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo systemctl status crio --all --full --no-pager                                                                                          │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo systemctl cat crio --no-pager                                                                                                          │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo crio config                                                                                                                            │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ delete  │ -p bridge-212776                                                                                                                                             │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ start   │ -p embed-certs-168452 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ embed-certs-168452     │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-691094 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                 │ old-k8s-version-691094 │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ stop    │ -p old-k8s-version-691094 --alsologtostderr -v=3                                                                                                             │ old-k8s-version-691094 │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 02:33:19
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 02:33:19.818158  315363 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:33:19.818478  315363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:33:19.818490  315363 out.go:374] Setting ErrFile to fd 2...
	I1119 02:33:19.818495  315363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:33:19.818721  315363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11107/.minikube/bin
	I1119 02:33:19.819330  315363 out.go:368] Setting JSON to false
	I1119 02:33:19.820616  315363 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4540,"bootTime":1763515060,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 02:33:19.820746  315363 start.go:143] virtualization: kvm guest
	I1119 02:33:19.822862  315363 out.go:179] * [embed-certs-168452] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 02:33:19.824498  315363 notify.go:221] Checking for updates...
	I1119 02:33:19.825083  315363 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:33:19.827189  315363 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:33:19.828628  315363 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-11107/kubeconfig
	I1119 02:33:19.830282  315363 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11107/.minikube
	I1119 02:33:19.832156  315363 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 02:33:19.833558  315363 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:33:19.835289  315363 config.go:182] Loaded profile config "kubernetes-upgrade-896338": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:33:19.835456  315363 config.go:182] Loaded profile config "no-preload-483142": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:33:19.835531  315363 config.go:182] Loaded profile config "old-k8s-version-691094": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1119 02:33:19.835628  315363 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:33:19.869670  315363 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 02:33:19.869754  315363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:33:19.948056  315363 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:91 SystemTime:2025-11-19 02:33:19.935291829 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:33:19.948230  315363 docker.go:319] overlay module found
	I1119 02:33:19.949713  315363 out.go:179] * Using the docker driver based on user configuration
	I1119 02:33:19.290831  301934 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:33:19.290855  301934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:33:19.290915  301934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-691094
	I1119 02:33:19.311399  301934 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:33:19.311423  301934 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:33:19.311589  301934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-691094
	I1119 02:33:19.329209  301934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/old-k8s-version-691094/id_rsa Username:docker}
	I1119 02:33:19.348646  301934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/old-k8s-version-691094/id_rsa Username:docker}
	I1119 02:33:19.386878  301934 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 02:33:19.430928  301934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:33:19.450594  301934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:33:19.476197  301934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:33:19.710133  301934 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1119 02:33:19.711417  301934 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-691094" to be "Ready" ...
	I1119 02:33:19.994360  301934 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 02:33:19.950788  315363 start.go:309] selected driver: docker
	I1119 02:33:19.950820  315363 start.go:930] validating driver "docker" against <nil>
	I1119 02:33:19.950835  315363 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:33:19.951688  315363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:33:20.027806  315363 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:91 SystemTime:2025-11-19 02:33:20.015781927 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:33:20.028020  315363 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 02:33:20.028315  315363 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:33:20.030421  315363 out.go:179] * Using Docker driver with root privileges
	I1119 02:33:20.031895  315363 cni.go:84] Creating CNI manager for ""
	I1119 02:33:20.031986  315363 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 02:33:20.031997  315363 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 02:33:20.032066  315363 start.go:353] cluster config:
	{Name:embed-certs-168452 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-168452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:33:20.034765  315363 out.go:179] * Starting "embed-certs-168452" primary control-plane node in "embed-certs-168452" cluster
	I1119 02:33:20.037487  315363 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1119 02:33:20.039029  315363 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 02:33:20.040485  315363 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 02:33:20.040520  315363 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1119 02:33:20.040528  315363 cache.go:65] Caching tarball of preloaded images
	I1119 02:33:20.040583  315363 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 02:33:20.040607  315363 preload.go:238] Found /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1119 02:33:20.040616  315363 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1119 02:33:20.040718  315363 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/config.json ...
	I1119 02:33:20.040739  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/config.json: {Name:mk2c1cb92572f9f7372f9d895b2c58b49c99bb3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:20.063579  315363 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 02:33:20.063610  315363 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 02:33:20.063636  315363 cache.go:243] Successfully downloaded all kic artifacts
	I1119 02:33:20.063670  315363 start.go:360] acquireMachinesLock for embed-certs-168452: {Name:mk4860299f8ff219c79992500844e49d455bd43a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:33:20.063790  315363 start.go:364] duration metric: took 102.461µs to acquireMachinesLock for "embed-certs-168452"
	I1119 02:33:20.063835  315363 start.go:93] Provisioning new machine with config: &{Name:embed-certs-168452 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-168452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 02:33:20.063944  315363 start.go:125] createHost starting for "" (driver="docker")
	I1119 02:33:19.995882  301934 addons.go:515] duration metric: took 741.418352ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 02:33:20.065989  315363 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 02:33:20.066193  315363 start.go:159] libmachine.API.Create for "embed-certs-168452" (driver="docker")
	I1119 02:33:20.066226  315363 client.go:173] LocalClient.Create starting
	I1119 02:33:20.066306  315363 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem
	I1119 02:33:20.066338  315363 main.go:143] libmachine: Decoding PEM data...
	I1119 02:33:20.066360  315363 main.go:143] libmachine: Parsing certificate...
	I1119 02:33:20.066438  315363 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem
	I1119 02:33:20.066464  315363 main.go:143] libmachine: Decoding PEM data...
	I1119 02:33:20.066475  315363 main.go:143] libmachine: Parsing certificate...
	I1119 02:33:20.066835  315363 cli_runner.go:164] Run: docker network inspect embed-certs-168452 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 02:33:20.087889  315363 cli_runner.go:211] docker network inspect embed-certs-168452 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 02:33:20.087987  315363 network_create.go:284] running [docker network inspect embed-certs-168452] to gather additional debugging logs...
	I1119 02:33:20.088020  315363 cli_runner.go:164] Run: docker network inspect embed-certs-168452
	W1119 02:33:20.108512  315363 cli_runner.go:211] docker network inspect embed-certs-168452 returned with exit code 1
	I1119 02:33:20.108553  315363 network_create.go:287] error running [docker network inspect embed-certs-168452]: docker network inspect embed-certs-168452: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-168452 not found
	I1119 02:33:20.108577  315363 network_create.go:289] output of [docker network inspect embed-certs-168452]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-168452 not found
	
	** /stderr **
	I1119 02:33:20.108677  315363 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:33:20.129985  315363 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ed39016f2aa9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:16:a0:62:5a:51} reservation:<nil>}
	I1119 02:33:20.130640  315363 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-42b0c19d513b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:b2:bf:ca:ce:21:95} reservation:<nil>}
	I1119 02:33:20.131454  315363 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-002e39e6dc05 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0a:8e:34:24:50:a5} reservation:<nil>}
	I1119 02:33:20.132210  315363 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c1155ea75a94 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:76:37:ad:5a:d8:36} reservation:<nil>}
	I1119 02:33:20.133253  315363 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-3ec6f45a7001 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:12:9a:69:49:8b:1f} reservation:<nil>}
	I1119 02:33:20.134343  315363 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ddf580}
	I1119 02:33:20.134393  315363 network_create.go:124] attempt to create docker network embed-certs-168452 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1119 02:33:20.134459  315363 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-168452 embed-certs-168452
	I1119 02:33:20.192566  315363 network_create.go:108] docker network embed-certs-168452 192.168.94.0/24 created
	I1119 02:33:20.192597  315363 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-168452" container
	I1119 02:33:20.192665  315363 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 02:33:20.216991  315363 cli_runner.go:164] Run: docker volume create embed-certs-168452 --label name.minikube.sigs.k8s.io=embed-certs-168452 --label created_by.minikube.sigs.k8s.io=true
	I1119 02:33:20.240868  315363 oci.go:103] Successfully created a docker volume embed-certs-168452
	I1119 02:33:20.240948  315363 cli_runner.go:164] Run: docker run --rm --name embed-certs-168452-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-168452 --entrypoint /usr/bin/test -v embed-certs-168452:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1119 02:33:20.653772  315363 oci.go:107] Successfully prepared a docker volume embed-certs-168452
	I1119 02:33:20.653851  315363 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 02:33:20.653866  315363 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 02:33:20.653963  315363 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-168452:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	I1119 02:33:20.215687  301934 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-691094" context rescaled to 1 replicas
	W1119 02:33:21.715210  301934 node_ready.go:57] node "old-k8s-version-691094" has "Ready":"False" status (will retry)
	W1119 02:33:24.323644  301934 node_ready.go:57] node "old-k8s-version-691094" has "Ready":"False" status (will retry)
	I1119 02:33:28.147893  307222 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 02:33:28.147982  307222 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 02:33:28.148104  307222 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 02:33:28.148201  307222 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 02:33:28.148256  307222 kubeadm.go:319] OS: Linux
	I1119 02:33:28.148332  307222 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 02:33:28.148450  307222 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 02:33:28.148522  307222 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 02:33:28.148596  307222 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 02:33:28.148672  307222 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 02:33:28.148756  307222 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 02:33:28.148841  307222 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 02:33:28.148915  307222 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 02:33:28.149019  307222 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 02:33:28.149159  307222 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 02:33:28.149311  307222 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 02:33:28.149421  307222 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 02:33:28.151537  307222 out.go:252]   - Generating certificates and keys ...
	I1119 02:33:28.151647  307222 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 02:33:28.151774  307222 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 02:33:28.151834  307222 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 02:33:28.151902  307222 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 02:33:28.152000  307222 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 02:33:28.152068  307222 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 02:33:28.152179  307222 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 02:33:28.152343  307222 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-483142] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 02:33:28.152451  307222 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 02:33:28.152598  307222 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-483142] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 02:33:28.152690  307222 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 02:33:28.152796  307222 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 02:33:28.152837  307222 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 02:33:28.152894  307222 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 02:33:28.152945  307222 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 02:33:28.153002  307222 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 02:33:28.153051  307222 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 02:33:28.153118  307222 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 02:33:28.153171  307222 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 02:33:28.153255  307222 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 02:33:28.153358  307222 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 02:33:28.154609  307222 out.go:252]   - Booting up control plane ...
	I1119 02:33:28.154709  307222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 02:33:28.154821  307222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 02:33:28.154904  307222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 02:33:28.155033  307222 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 02:33:28.155173  307222 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 02:33:28.155323  307222 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 02:33:28.155456  307222 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 02:33:28.155501  307222 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 02:33:28.155631  307222 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 02:33:28.155728  307222 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 02:33:28.155805  307222 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001464049s
	I1119 02:33:28.155906  307222 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 02:33:28.156017  307222 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1119 02:33:28.156135  307222 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 02:33:28.156242  307222 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 02:33:28.156335  307222 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.319882231s
	I1119 02:33:28.156456  307222 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.432703999s
	I1119 02:33:28.156560  307222 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.001475545s
	I1119 02:33:28.156685  307222 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 02:33:28.156832  307222 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 02:33:28.156917  307222 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 02:33:28.157202  307222 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-483142 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 02:33:28.157272  307222 kubeadm.go:319] [bootstrap-token] Using token: nwrx92.0c942uuundzydmcz
	I1119 02:33:28.159046  307222 out.go:252]   - Configuring RBAC rules ...
	I1119 02:33:28.159207  307222 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 02:33:28.159328  307222 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 02:33:28.159549  307222 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 02:33:28.159720  307222 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 02:33:28.159922  307222 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 02:33:28.160077  307222 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 02:33:28.160254  307222 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 02:33:28.160329  307222 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 02:33:28.160427  307222 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 02:33:28.160443  307222 kubeadm.go:319] 
	I1119 02:33:28.160527  307222 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 02:33:28.160536  307222 kubeadm.go:319] 
	I1119 02:33:28.160603  307222 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 02:33:28.160610  307222 kubeadm.go:319] 
	I1119 02:33:28.160642  307222 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 02:33:28.160730  307222 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 02:33:28.160832  307222 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 02:33:28.160845  307222 kubeadm.go:319] 
	I1119 02:33:28.160922  307222 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 02:33:28.160942  307222 kubeadm.go:319] 
	I1119 02:33:28.161016  307222 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 02:33:28.161031  307222 kubeadm.go:319] 
	I1119 02:33:28.161114  307222 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 02:33:28.161229  307222 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 02:33:28.161347  307222 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 02:33:28.161359  307222 kubeadm.go:319] 
	I1119 02:33:28.161531  307222 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 02:33:28.161656  307222 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 02:33:28.161665  307222 kubeadm.go:319] 
	I1119 02:33:28.161797  307222 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token nwrx92.0c942uuundzydmcz \
	I1119 02:33:28.161968  307222 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7672c4bccf136c60db27ba1101c0494bf335a1a16bcd609d07ab95cd78c58a5a \
	I1119 02:33:28.162022  307222 kubeadm.go:319] 	--control-plane 
	I1119 02:33:28.162036  307222 kubeadm.go:319] 
	I1119 02:33:28.162163  307222 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 02:33:28.162174  307222 kubeadm.go:319] 
	I1119 02:33:28.162301  307222 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token nwrx92.0c942uuundzydmcz \
	I1119 02:33:28.162456  307222 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7672c4bccf136c60db27ba1101c0494bf335a1a16bcd609d07ab95cd78c58a5a 
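Note: the --discovery-token-ca-cert-hash in the join commands above can be recomputed from the cluster CA to verify a join command before use. This is the standard openssl pipeline from the kubeadm docs, pointed at the certificateDir named in the [certs] phase (run on the node):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'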
	I1119 02:33:28.162469  307222 cni.go:84] Creating CNI manager for ""
	I1119 02:33:28.162475  307222 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 02:33:28.164382  307222 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 02:33:25.786283  315363 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-168452:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (5.132274902s)
	I1119 02:33:25.786322  315363 kic.go:203] duration metric: took 5.132452147s to extract preloaded images to volume ...
	W1119 02:33:25.786460  315363 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1119 02:33:25.786504  315363 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1119 02:33:25.786554  315363 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 02:33:25.853413  315363 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-168452 --name embed-certs-168452 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-168452 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-168452 --network embed-certs-168452 --ip 192.168.94.2 --volume embed-certs-168452:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
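Note: each --publish=127.0.0.1::<port> in the docker run above binds an ephemeral localhost port, which is why SSH is later reached on 127.0.0.1:33105. The actual mappings can be read back with docker port:

	docker port embed-certs-168452 22     # -> 127.0.0.1:33105 per the log below
	docker port embed-certs-168452 8443   # apiserver mapping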
	I1119 02:33:26.238651  315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Running}}
	I1119 02:33:26.261169  315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Status}}
	I1119 02:33:26.284313  315363 cli_runner.go:164] Run: docker exec embed-certs-168452 stat /var/lib/dpkg/alternatives/iptables
	I1119 02:33:26.336955  315363 oci.go:144] the created container "embed-certs-168452" has a running status.
	I1119 02:33:26.336985  315363 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa...
	I1119 02:33:26.484310  315363 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 02:33:26.517116  315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Status}}
	I1119 02:33:26.542901  315363 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 02:33:26.542925  315363 kic_runner.go:114] Args: [docker exec --privileged embed-certs-168452 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 02:33:26.595205  315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Status}}
	I1119 02:33:26.623359  315363 machine.go:94] provisionDockerMachine start ...
	I1119 02:33:26.623527  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:26.646254  315363 main.go:143] libmachine: Using SSH client type: native
	I1119 02:33:26.646550  315363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33105 <nil> <nil>}
	I1119 02:33:26.646569  315363 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 02:33:26.799221  315363 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-168452
	
	I1119 02:33:26.799250  315363 ubuntu.go:182] provisioning hostname "embed-certs-168452"
	I1119 02:33:26.799334  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:26.820929  315363 main.go:143] libmachine: Using SSH client type: native
	I1119 02:33:26.821188  315363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33105 <nil> <nil>}
	I1119 02:33:26.821210  315363 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-168452 && echo "embed-certs-168452" | sudo tee /etc/hostname
	I1119 02:33:26.966035  315363 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-168452
	
	I1119 02:33:26.966125  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:26.985276  315363 main.go:143] libmachine: Using SSH client type: native
	I1119 02:33:26.985598  315363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33105 <nil> <nil>}
	I1119 02:33:26.985633  315363 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-168452' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-168452/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-168452' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 02:33:27.121670  315363 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 02:33:27.121703  315363 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-11107/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-11107/.minikube}
	I1119 02:33:27.121727  315363 ubuntu.go:190] setting up certificates
	I1119 02:33:27.123000  315363 provision.go:84] configureAuth start
	I1119 02:33:27.123195  315363 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-168452
	I1119 02:33:27.143490  315363 provision.go:143] copyHostCerts
	I1119 02:33:27.143570  315363 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11107/.minikube/ca.pem, removing ...
	I1119 02:33:27.143580  315363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11107/.minikube/ca.pem
	I1119 02:33:27.143645  315363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-11107/.minikube/ca.pem (1082 bytes)
	I1119 02:33:27.143736  315363 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11107/.minikube/cert.pem, removing ...
	I1119 02:33:27.143744  315363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11107/.minikube/cert.pem
	I1119 02:33:27.143773  315363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-11107/.minikube/cert.pem (1123 bytes)
	I1119 02:33:27.143829  315363 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11107/.minikube/key.pem, removing ...
	I1119 02:33:27.143835  315363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11107/.minikube/key.pem
	I1119 02:33:27.143858  315363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-11107/.minikube/key.pem (1675 bytes)
	I1119 02:33:27.143923  315363 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-11107/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca-key.pem org=jenkins.embed-certs-168452 san=[127.0.0.1 192.168.94.2 embed-certs-168452 localhost minikube]
	I1119 02:33:27.239080  315363 provision.go:177] copyRemoteCerts
	I1119 02:33:27.239165  315363 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 02:33:27.239198  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:27.262397  315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
	I1119 02:33:27.362967  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 02:33:27.387666  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1119 02:33:27.418735  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
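Note: the server cert copied above was generated with the SAN list from the provision step (127.0.0.1, 192.168.94.2, embed-certs-168452, localhost, minikube). A sketch for confirming the SANs on the node, using only stock openssl:

	openssl x509 -in /etc/docker/server.pem -noout -text \
	  | grep -A1 'Subject Alternative Name'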
	I1119 02:33:27.446098  315363 provision.go:87] duration metric: took 323.082791ms to configureAuth
	I1119 02:33:27.446129  315363 ubuntu.go:206] setting minikube options for container-runtime
	I1119 02:33:27.446313  315363 config.go:182] Loaded profile config "embed-certs-168452": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:33:27.446327  315363 machine.go:97] duration metric: took 822.891862ms to provisionDockerMachine
	I1119 02:33:27.446333  315363 client.go:176] duration metric: took 7.38010166s to LocalClient.Create
	I1119 02:33:27.446351  315363 start.go:167] duration metric: took 7.380160884s to libmachine.API.Create "embed-certs-168452"
	I1119 02:33:27.446358  315363 start.go:293] postStartSetup for "embed-certs-168452" (driver="docker")
	I1119 02:33:27.446409  315363 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 02:33:27.446465  315363 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 02:33:27.446501  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:27.470807  315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
	I1119 02:33:27.575097  315363 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 02:33:27.580067  315363 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 02:33:27.580102  315363 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 02:33:27.580115  315363 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11107/.minikube/addons for local assets ...
	I1119 02:33:27.580188  315363 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11107/.minikube/files for local assets ...
	I1119 02:33:27.580303  315363 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem -> 146572.pem in /etc/ssl/certs
	I1119 02:33:27.580434  315363 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 02:33:27.588848  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem --> /etc/ssl/certs/146572.pem (1708 bytes)
	I1119 02:33:27.611498  315363 start.go:296] duration metric: took 165.12815ms for postStartSetup
	I1119 02:33:27.611895  315363 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-168452
	I1119 02:33:27.630987  315363 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/config.json ...
	I1119 02:33:27.631276  315363 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:33:27.631327  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:27.650599  315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
	I1119 02:33:27.747119  315363 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 02:33:27.752242  315363 start.go:128] duration metric: took 7.68828048s to createHost
	I1119 02:33:27.752270  315363 start.go:83] releasing machines lock for "embed-certs-168452", held for 7.688466151s
	I1119 02:33:27.752448  315363 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-168452
	I1119 02:33:27.772595  315363 ssh_runner.go:195] Run: cat /version.json
	I1119 02:33:27.772634  315363 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 02:33:27.772668  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:27.772695  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:27.795020  315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
	I1119 02:33:27.795311  315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
	I1119 02:33:27.889466  315363 ssh_runner.go:195] Run: systemctl --version
	I1119 02:33:27.948057  315363 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 02:33:27.953270  315363 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 02:33:27.953328  315363 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 02:33:27.979962  315363 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 02:33:27.979983  315363 start.go:496] detecting cgroup driver to use...
	I1119 02:33:27.980013  315363 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 02:33:27.980050  315363 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1119 02:33:27.995148  315363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1119 02:33:28.009176  315363 docker.go:218] disabling cri-docker service (if available) ...
	I1119 02:33:28.009239  315363 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 02:33:28.028120  315363 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 02:33:28.047654  315363 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 02:33:28.137742  315363 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 02:33:28.233503  315363 docker.go:234] disabling docker service ...
	I1119 02:33:28.233569  315363 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 02:33:28.254546  315363 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 02:33:28.270970  315363 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 02:33:28.372358  315363 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 02:33:28.475816  315363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 02:33:28.494447  315363 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 02:33:28.514112  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1119 02:33:28.528713  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1119 02:33:28.542307  315363 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1119 02:33:28.542395  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1119 02:33:28.553682  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 02:33:28.564425  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1119 02:33:28.574563  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 02:33:28.585047  315363 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 02:33:28.594876  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1119 02:33:28.606066  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1119 02:33:28.616549  315363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1119 02:33:28.627283  315363 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 02:33:28.635846  315363 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 02:33:28.643854  315363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:33:28.727138  315363 ssh_runner.go:195] Run: sudo systemctl restart containerd
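Note: the sed edits above pin the pause image to registry.k8s.io/pause:3.10.1 and switch containerd to the systemd cgroup driver before this restart. Both settings can be checked directly in the file the seds rewrote:

	sudo grep -n 'SystemdCgroup\|sandbox_image' /etc/containerd/config.toml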
	I1119 02:33:28.825075  315363 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1119 02:33:28.825141  315363 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1119 02:33:28.829886  315363 start.go:564] Will wait 60s for crictl version
	I1119 02:33:28.829954  315363 ssh_runner.go:195] Run: which crictl
	I1119 02:33:28.834062  315363 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 02:33:28.859386  315363 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1119 02:33:28.859454  315363 ssh_runner.go:195] Run: containerd --version
	I1119 02:33:28.881932  315363 ssh_runner.go:195] Run: containerd --version
	I1119 02:33:28.905418  315363 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1119 02:33:28.906851  315363 cli_runner.go:164] Run: docker network inspect embed-certs-168452 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:33:28.925576  315363 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1119 02:33:28.930043  315363 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
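Note: the bash one-liner above rewrites /etc/hosts inside the node so host.minikube.internal resolves to the bridge gateway 192.168.94.1. It can be verified over the same SSH path minikube uses:

	out/minikube-linux-amd64 -p embed-certs-168452 ssh \
	    "grep host.minikube.internal /etc/hosts"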
	I1119 02:33:28.941472  315363 kubeadm.go:884] updating cluster {Name:embed-certs-168452 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-168452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 02:33:28.941570  315363 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 02:33:28.941633  315363 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:33:28.969084  315363 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 02:33:28.969102  315363 containerd.go:534] Images already preloaded, skipping extraction
	I1119 02:33:28.969159  315363 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:33:28.994529  315363 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 02:33:28.994549  315363 cache_images.go:86] Images are preloaded, skipping loading
	I1119 02:33:28.994556  315363 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 containerd true true} ...
	I1119 02:33:28.994637  315363 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-168452 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-168452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
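Note: the kubelet unit above is a standard systemd drop-in (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf over /lib/systemd/system/kubelet.service, both scp'd a few lines below): the empty ExecStart= line clears the ExecStart inherited from the base unit before the minikube-specific one is set. The merged unit is visible on the node with:

	sudo systemctl cat kubelet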
	I1119 02:33:28.994694  315363 ssh_runner.go:195] Run: sudo crictl info
	I1119 02:33:29.023174  315363 cni.go:84] Creating CNI manager for ""
	I1119 02:33:29.023197  315363 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 02:33:29.023211  315363 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 02:33:29.023232  315363 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-168452 NodeName:embed-certs-168452 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:33:29.023337  315363 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-168452"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
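Note: a config like the one above can be linted before use with kubeadm's validate subcommand (available in recent kubeadm releases, including the v1.34.1 binaries used here; the path below is the staging file the scp step writes a few lines down):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new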
	
	I1119 02:33:29.023423  315363 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 02:33:29.032358  315363 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 02:33:29.032438  315363 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:33:29.041206  315363 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1119 02:33:29.056159  315363 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:33:29.074583  315363 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I1119 02:33:29.089316  315363 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:33:29.093854  315363 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:33:29.106602  315363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:33:29.193818  315363 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:33:29.220027  315363 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452 for IP: 192.168.94.2
	I1119 02:33:29.220053  315363 certs.go:195] generating shared ca certs ...
	I1119 02:33:29.220075  315363 certs.go:227] acquiring lock for ca certs: {Name:mk11d6789b2333e17b3937495b501fbcca15c242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:29.220231  315363 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-11107/.minikube/ca.key
	I1119 02:33:29.220278  315363 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-11107/.minikube/proxy-client-ca.key
	I1119 02:33:29.220287  315363 certs.go:257] generating profile certs ...
	I1119 02:33:29.220334  315363 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/client.key
	I1119 02:33:29.220351  315363 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/client.crt with IP's: []
	I1119 02:33:29.496773  315363 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/client.crt ...
	I1119 02:33:29.496800  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/client.crt: {Name:mkdb5e24f9c8b0d3d9849ba91ac24e28be0abdf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:29.496993  315363 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/client.key ...
	I1119 02:33:29.497006  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/client.key: {Name:mk5aa88fe9180cc5f94c07d5a968428b4ccf37cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:29.497088  315363 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.key.0e221cc2
	I1119 02:33:29.497102  315363 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.crt.0e221cc2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
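Note: the IP list above includes 10.96.0.1, the first address of the serviceSubnet 10.96.0.0/12 from the kubeadm config, i.e. the ClusterIP the in-cluster kubernetes Service will receive; the apiserver cert must cover it. Once the cluster is up this can be cross-checked with:

	kubectl --context embed-certs-168452 get svc kubernetes \
	    -o jsonpath='{.spec.clusterIP}'   # expect 10.96.0.1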
	W1119 02:33:26.721525  301934 node_ready.go:57] node "old-k8s-version-691094" has "Ready":"False" status (will retry)
	W1119 02:33:29.215940  301934 node_ready.go:57] node "old-k8s-version-691094" has "Ready":"False" status (will retry)
	I1119 02:33:28.165835  307222 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 02:33:28.176028  307222 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 02:33:28.176052  307222 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 02:33:28.195615  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 02:33:28.450816  307222 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 02:33:28.450899  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:28.450933  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-483142 minikube.k8s.io/updated_at=2025_11_19T02_33_28_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=no-preload-483142 minikube.k8s.io/primary=true
	I1119 02:33:28.538275  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:28.538445  307222 ops.go:34] apiserver oom_adj: -16
	I1119 02:33:29.038968  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:29.539224  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:30.038530  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:30.539271  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:31.038434  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:31.538496  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:32.038945  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:32.539001  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:33.038571  307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:33.129034  307222 kubeadm.go:1114] duration metric: took 4.678195875s to wait for elevateKubeSystemPrivileges
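Note: the elevateKubeSystemPrivileges wait above polls the same readiness probe every ~500ms until the token controller has created the default ServiceAccount (about 4.7s here); the probe is the exact command in the loop:

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig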
	I1119 02:33:33.129095  307222 kubeadm.go:403] duration metric: took 17.40558167s to StartCluster
	I1119 02:33:33.129119  307222 settings.go:142] acquiring lock: {Name:mka23b77a5b0acf39a3c823ca2c708cd59c6f5ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:33.129202  307222 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11107/kubeconfig
	I1119 02:33:33.131182  307222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/kubeconfig: {Name:mka1017ae43351e828a5b28d36aa5ae57c3e40e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:33.131481  307222 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 02:33:33.131519  307222 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 02:33:33.131585  307222 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:33:33.131706  307222 addons.go:70] Setting storage-provisioner=true in profile "no-preload-483142"
	I1119 02:33:33.131748  307222 config.go:182] Loaded profile config "no-preload-483142": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:33:33.131794  307222 addons.go:70] Setting default-storageclass=true in profile "no-preload-483142"
	I1119 02:33:33.131827  307222 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-483142"
	I1119 02:33:33.131810  307222 addons.go:239] Setting addon storage-provisioner=true in "no-preload-483142"
	I1119 02:33:33.131959  307222 host.go:66] Checking if "no-preload-483142" exists ...
	I1119 02:33:33.132200  307222 cli_runner.go:164] Run: docker container inspect no-preload-483142 --format={{.State.Status}}
	I1119 02:33:33.132480  307222 cli_runner.go:164] Run: docker container inspect no-preload-483142 --format={{.State.Status}}
	I1119 02:33:33.134152  307222 out.go:179] * Verifying Kubernetes components...
	I1119 02:33:33.135585  307222 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:33:33.159834  307222 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:33:33.160479  307222 addons.go:239] Setting addon default-storageclass=true in "no-preload-483142"
	I1119 02:33:33.160545  307222 host.go:66] Checking if "no-preload-483142" exists ...
	I1119 02:33:33.161017  307222 cli_runner.go:164] Run: docker container inspect no-preload-483142 --format={{.State.Status}}
	I1119 02:33:33.161390  307222 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:33:33.161410  307222 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:33:33.161458  307222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-483142
	I1119 02:33:33.198354  307222 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:33:33.198390  307222 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:33:33.198448  307222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-483142
	I1119 02:33:33.198522  307222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/no-preload-483142/id_rsa Username:docker}
	I1119 02:33:33.223657  307222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/no-preload-483142/id_rsa Username:docker}
	I1119 02:33:33.248952  307222 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 02:33:33.322673  307222 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:33:33.348662  307222 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:33:33.354901  307222 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:33:33.503051  307222 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1119 02:33:33.504327  307222 node_ready.go:35] waiting up to 6m0s for node "no-preload-483142" to be "Ready" ...
	I1119 02:33:33.756829  307222 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
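Note: the sed pipeline above injects a hosts block into the CoreDNS Corefile so pods resolve host.minikube.internal to 192.168.76.1. The patched block can be inspected after startup:

	kubectl --context no-preload-483142 -n kube-system \
	    get configmap coredns -o yaml | grep -A3 'hosts {'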
	I1119 02:33:29.844643  315363 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.crt.0e221cc2 ...
	I1119 02:33:29.844667  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.crt.0e221cc2: {Name:mk1596cf7137a998e277abf94c4c839907009a9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:29.844872  315363 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.key.0e221cc2 ...
	I1119 02:33:29.844901  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.key.0e221cc2: {Name:mk9d817ab63555ebb02e0590916ce23352cf008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:29.845022  315363 certs.go:382] copying /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.crt.0e221cc2 -> /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.crt
	I1119 02:33:29.845144  315363 certs.go:386] copying /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.key.0e221cc2 -> /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.key
	I1119 02:33:29.845239  315363 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.key
	I1119 02:33:29.845260  315363 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.crt with IP's: []
	I1119 02:33:30.013529  315363 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.crt ...
	I1119 02:33:30.013564  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.crt: {Name:mka683634a30ab1845434f0fc49f75059694b447 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:30.013775  315363 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.key ...
	I1119 02:33:30.013796  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.key: {Name:mk9e8dbde74fbcae5bb0e966570ae4f43c6f07e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:30.014054  315363 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/14657.pem (1338 bytes)
	W1119 02:33:30.014108  315363 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-11107/.minikube/certs/14657_empty.pem, impossibly tiny 0 bytes
	I1119 02:33:30.014124  315363 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 02:33:30.014183  315363 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem (1082 bytes)
	I1119 02:33:30.014219  315363 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:33:30.014257  315363 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/key.pem (1675 bytes)
	I1119 02:33:30.014318  315363 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem (1708 bytes)
	I1119 02:33:30.014986  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:33:30.034798  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 02:33:30.054155  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:33:30.074272  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:33:30.094396  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1119 02:33:30.114605  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 02:33:30.133991  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:33:30.153105  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 02:33:30.172052  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem --> /usr/share/ca-certificates/146572.pem (1708 bytes)
	I1119 02:33:30.194139  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:33:30.212546  315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/certs/14657.pem --> /usr/share/ca-certificates/14657.pem (1338 bytes)
	I1119 02:33:30.231534  315363 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 02:33:30.246493  315363 ssh_runner.go:195] Run: openssl version
	I1119 02:33:30.252586  315363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146572.pem && ln -fs /usr/share/ca-certificates/146572.pem /etc/ssl/certs/146572.pem"
	I1119 02:33:30.261620  315363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146572.pem
	I1119 02:33:30.265824  315363 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:02 /usr/share/ca-certificates/146572.pem
	I1119 02:33:30.265886  315363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146572.pem
	I1119 02:33:30.301164  315363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/146572.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 02:33:30.310429  315363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:33:30.319818  315363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:33:30.323998  315363 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:57 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:33:30.324046  315363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:33:30.360567  315363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 02:33:30.370492  315363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14657.pem && ln -fs /usr/share/ca-certificates/14657.pem /etc/ssl/certs/14657.pem"
	I1119 02:33:30.380695  315363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14657.pem
	I1119 02:33:30.385171  315363 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:02 /usr/share/ca-certificates/14657.pem
	I1119 02:33:30.385241  315363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14657.pem
	I1119 02:33:30.422375  315363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14657.pem /etc/ssl/certs/51391683.0"
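The hash-named links created above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's hashed CA-directory convention: the file name is the certificate's subject-name hash plus a collision counter. A minimal sketch of the same steps by hand, using the minikubeCA cert from this run (paths taken from the log):

    # print the subject hash OpenSSL uses to look certs up in /etc/ssl/certs
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # -> b5213941
    # link the cert under <hash>.0 so the default verify path can find it
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0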
	I1119 02:33:30.432329  315363 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:33:30.436333  315363 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 02:33:30.436432  315363 kubeadm.go:401] StartCluster: {Name:embed-certs-168452 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-168452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:33:30.436494  315363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1119 02:33:30.436588  315363 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:33:30.465191  315363 cri.go:89] found id: ""
	I1119 02:33:30.465255  315363 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:33:30.474328  315363 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 02:33:30.483132  315363 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 02:33:30.483196  315363 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 02:33:30.491249  315363 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 02:33:30.491272  315363 kubeadm.go:158] found existing configuration files:
	
	I1119 02:33:30.491320  315363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 02:33:30.499072  315363 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 02:33:30.499140  315363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 02:33:30.507018  315363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 02:33:30.514836  315363 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 02:33:30.514890  315363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 02:33:30.523396  315363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 02:33:30.532721  315363 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 02:33:30.532772  315363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 02:33:30.541409  315363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 02:33:30.550090  315363 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 02:33:30.550157  315363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
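Each of the four kubeconfig checks above is the same grep-then-remove pattern: if the expected control-plane URL is not found (or the file is missing, as here), the stale file is deleted before kubeadm init regenerates it. Condensed into an illustrative loop (not minikube's actual code):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
        || sudo rm -f /etc/kubernetes/$f.conf
    done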
	I1119 02:33:30.558693  315363 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 02:33:30.636057  315363 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1119 02:33:30.702518  315363 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1119 02:33:31.715333  301934 node_ready.go:57] node "old-k8s-version-691094" has "Ready":"False" status (will retry)
	W1119 02:33:33.715963  301934 node_ready.go:57] node "old-k8s-version-691094" has "Ready":"False" status (will retry)
	I1119 02:33:34.216972  301934 node_ready.go:49] node "old-k8s-version-691094" is "Ready"
	I1119 02:33:34.217010  301934 node_ready.go:38] duration metric: took 14.505569399s for node "old-k8s-version-691094" to be "Ready" ...
	I1119 02:33:34.217027  301934 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:33:34.217083  301934 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:33:34.235995  301934 api_server.go:72] duration metric: took 14.98160502s to wait for apiserver process to appear ...
	I1119 02:33:34.236024  301934 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:33:34.236046  301934 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1119 02:33:34.242612  301934 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1119 02:33:34.244469  301934 api_server.go:141] control plane version: v1.28.0
	I1119 02:33:34.244501  301934 api_server.go:131] duration metric: took 8.468136ms to wait for apiserver health ...
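The healthz probe above is a plain HTTPS GET that Kubernetes' default RBAC exposes even to unauthenticated clients; an equivalent manual check (IP from this run, -k because the serving cert is signed by minikube's private CA):

    curl -k https://192.168.103.2:8443/healthz
    # -> ok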
	I1119 02:33:34.244512  301934 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 02:33:34.249250  301934 system_pods.go:59] 8 kube-system pods found
	I1119 02:33:34.249293  301934 system_pods.go:61] "coredns-5dd5756b68-bbvqz" [56c0e21e-9d86-46c6-bc02-2a75554c0f07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:34.249301  301934 system_pods.go:61] "etcd-old-k8s-version-691094" [76abba5a-69d9-4c2c-aeb1-0225fad93a1d] Running
	I1119 02:33:34.249308  301934 system_pods.go:61] "kindnet-b9cwh" [3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352] Running
	I1119 02:33:34.249326  301934 system_pods.go:61] "kube-apiserver-old-k8s-version-691094" [f47acebe-490a-4f18-bb22-ab3375a572a9] Running
	I1119 02:33:34.249331  301934 system_pods.go:61] "kube-controller-manager-old-k8s-version-691094" [b1768141-5e2e-4b98-9747-6d96a1e4f121] Running
	I1119 02:33:34.249336  301934 system_pods.go:61] "kube-proxy-79df5" [d23dd2d3-6511-45fb-ae70-d1da7b9b6b28] Running
	I1119 02:33:34.249340  301934 system_pods.go:61] "kube-scheduler-old-k8s-version-691094" [ac8b4be1-254b-4014-93ea-35ba777dc762] Running
	I1119 02:33:34.249347  301934 system_pods.go:61] "storage-provisioner" [135636ea-f34f-4bfc-b2f6-cbbf3e91ca30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:34.249389  301934 system_pods.go:74] duration metric: took 4.842718ms to wait for pod list to return data ...
	I1119 02:33:34.249403  301934 default_sa.go:34] waiting for default service account to be created ...
	I1119 02:33:34.251979  301934 default_sa.go:45] found service account: "default"
	I1119 02:33:34.252000  301934 default_sa.go:55] duration metric: took 2.59102ms for default service account to be created ...
	I1119 02:33:34.252008  301934 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 02:33:34.256098  301934 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:34.256141  301934 system_pods.go:89] "coredns-5dd5756b68-bbvqz" [56c0e21e-9d86-46c6-bc02-2a75554c0f07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:34.256148  301934 system_pods.go:89] "etcd-old-k8s-version-691094" [76abba5a-69d9-4c2c-aeb1-0225fad93a1d] Running
	I1119 02:33:34.256155  301934 system_pods.go:89] "kindnet-b9cwh" [3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352] Running
	I1119 02:33:34.256158  301934 system_pods.go:89] "kube-apiserver-old-k8s-version-691094" [f47acebe-490a-4f18-bb22-ab3375a572a9] Running
	I1119 02:33:34.256163  301934 system_pods.go:89] "kube-controller-manager-old-k8s-version-691094" [b1768141-5e2e-4b98-9747-6d96a1e4f121] Running
	I1119 02:33:34.256166  301934 system_pods.go:89] "kube-proxy-79df5" [d23dd2d3-6511-45fb-ae70-d1da7b9b6b28] Running
	I1119 02:33:34.256169  301934 system_pods.go:89] "kube-scheduler-old-k8s-version-691094" [ac8b4be1-254b-4014-93ea-35ba777dc762] Running
	I1119 02:33:34.256173  301934 system_pods.go:89] "storage-provisioner" [135636ea-f34f-4bfc-b2f6-cbbf3e91ca30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:34.256204  301934 retry.go:31] will retry after 294.08163ms: missing components: kube-dns
	I1119 02:33:34.555117  301934 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:34.555149  301934 system_pods.go:89] "coredns-5dd5756b68-bbvqz" [56c0e21e-9d86-46c6-bc02-2a75554c0f07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:34.555155  301934 system_pods.go:89] "etcd-old-k8s-version-691094" [76abba5a-69d9-4c2c-aeb1-0225fad93a1d] Running
	I1119 02:33:34.555160  301934 system_pods.go:89] "kindnet-b9cwh" [3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352] Running
	I1119 02:33:34.555164  301934 system_pods.go:89] "kube-apiserver-old-k8s-version-691094" [f47acebe-490a-4f18-bb22-ab3375a572a9] Running
	I1119 02:33:34.555168  301934 system_pods.go:89] "kube-controller-manager-old-k8s-version-691094" [b1768141-5e2e-4b98-9747-6d96a1e4f121] Running
	I1119 02:33:34.555171  301934 system_pods.go:89] "kube-proxy-79df5" [d23dd2d3-6511-45fb-ae70-d1da7b9b6b28] Running
	I1119 02:33:34.555174  301934 system_pods.go:89] "kube-scheduler-old-k8s-version-691094" [ac8b4be1-254b-4014-93ea-35ba777dc762] Running
	I1119 02:33:34.555181  301934 system_pods.go:89] "storage-provisioner" [135636ea-f34f-4bfc-b2f6-cbbf3e91ca30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:34.555200  301934 retry.go:31] will retry after 239.208285ms: missing components: kube-dns
	I1119 02:33:34.801314  301934 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:34.801356  301934 system_pods.go:89] "coredns-5dd5756b68-bbvqz" [56c0e21e-9d86-46c6-bc02-2a75554c0f07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:34.801397  301934 system_pods.go:89] "etcd-old-k8s-version-691094" [76abba5a-69d9-4c2c-aeb1-0225fad93a1d] Running
	I1119 02:33:34.801408  301934 system_pods.go:89] "kindnet-b9cwh" [3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352] Running
	I1119 02:33:34.801414  301934 system_pods.go:89] "kube-apiserver-old-k8s-version-691094" [f47acebe-490a-4f18-bb22-ab3375a572a9] Running
	I1119 02:33:34.801421  301934 system_pods.go:89] "kube-controller-manager-old-k8s-version-691094" [b1768141-5e2e-4b98-9747-6d96a1e4f121] Running
	I1119 02:33:34.801426  301934 system_pods.go:89] "kube-proxy-79df5" [d23dd2d3-6511-45fb-ae70-d1da7b9b6b28] Running
	I1119 02:33:34.801432  301934 system_pods.go:89] "kube-scheduler-old-k8s-version-691094" [ac8b4be1-254b-4014-93ea-35ba777dc762] Running
	I1119 02:33:34.801446  301934 system_pods.go:89] "storage-provisioner" [135636ea-f34f-4bfc-b2f6-cbbf3e91ca30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:34.801465  301934 retry.go:31] will retry after 406.320974ms: missing components: kube-dns
	I1119 02:33:33.758898  307222 addons.go:515] duration metric: took 627.311179ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 02:33:34.007122  307222 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-483142" context rescaled to 1 replicas
	W1119 02:33:35.507777  307222 node_ready.go:57] node "no-preload-483142" has "Ready":"False" status (will retry)
	I1119 02:33:35.212153  301934 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:35.212193  301934 system_pods.go:89] "coredns-5dd5756b68-bbvqz" [56c0e21e-9d86-46c6-bc02-2a75554c0f07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:35.212202  301934 system_pods.go:89] "etcd-old-k8s-version-691094" [76abba5a-69d9-4c2c-aeb1-0225fad93a1d] Running
	I1119 02:33:35.212208  301934 system_pods.go:89] "kindnet-b9cwh" [3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352] Running
	I1119 02:33:35.212214  301934 system_pods.go:89] "kube-apiserver-old-k8s-version-691094" [f47acebe-490a-4f18-bb22-ab3375a572a9] Running
	I1119 02:33:35.212221  301934 system_pods.go:89] "kube-controller-manager-old-k8s-version-691094" [b1768141-5e2e-4b98-9747-6d96a1e4f121] Running
	I1119 02:33:35.212226  301934 system_pods.go:89] "kube-proxy-79df5" [d23dd2d3-6511-45fb-ae70-d1da7b9b6b28] Running
	I1119 02:33:35.212230  301934 system_pods.go:89] "kube-scheduler-old-k8s-version-691094" [ac8b4be1-254b-4014-93ea-35ba777dc762] Running
	I1119 02:33:35.212235  301934 system_pods.go:89] "storage-provisioner" [135636ea-f34f-4bfc-b2f6-cbbf3e91ca30] Running
	I1119 02:33:35.212252  301934 retry.go:31] will retry after 502.533324ms: missing components: kube-dns
	I1119 02:33:35.719172  301934 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:35.719211  301934 system_pods.go:89] "coredns-5dd5756b68-bbvqz" [56c0e21e-9d86-46c6-bc02-2a75554c0f07] Running
	I1119 02:33:35.719220  301934 system_pods.go:89] "etcd-old-k8s-version-691094" [76abba5a-69d9-4c2c-aeb1-0225fad93a1d] Running
	I1119 02:33:35.719225  301934 system_pods.go:89] "kindnet-b9cwh" [3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352] Running
	I1119 02:33:35.719231  301934 system_pods.go:89] "kube-apiserver-old-k8s-version-691094" [f47acebe-490a-4f18-bb22-ab3375a572a9] Running
	I1119 02:33:35.719238  301934 system_pods.go:89] "kube-controller-manager-old-k8s-version-691094" [b1768141-5e2e-4b98-9747-6d96a1e4f121] Running
	I1119 02:33:35.719243  301934 system_pods.go:89] "kube-proxy-79df5" [d23dd2d3-6511-45fb-ae70-d1da7b9b6b28] Running
	I1119 02:33:35.719248  301934 system_pods.go:89] "kube-scheduler-old-k8s-version-691094" [ac8b4be1-254b-4014-93ea-35ba777dc762] Running
	I1119 02:33:35.719254  301934 system_pods.go:89] "storage-provisioner" [135636ea-f34f-4bfc-b2f6-cbbf3e91ca30] Running
	I1119 02:33:35.719267  301934 system_pods.go:126] duration metric: took 1.46725409s to wait for k8s-apps to be running ...
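The retry loop above polls the kube-system pod list until no component matching the required labels is still Pending; a rough kubectl equivalent for the one component it kept waiting on (timeout value illustrative):

    kubectl -n kube-system wait --for=condition=Ready pods -l k8s-app=kube-dns --timeout=120s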
	I1119 02:33:35.719280  301934 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 02:33:35.719333  301934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:33:35.733944  301934 system_svc.go:56] duration metric: took 14.654804ms WaitForService to wait for kubelet
	I1119 02:33:35.733974  301934 kubeadm.go:587] duration metric: took 16.479589704s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:33:35.733994  301934 node_conditions.go:102] verifying NodePressure condition ...
	I1119 02:33:35.736881  301934 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 02:33:35.736904  301934 node_conditions.go:123] node cpu capacity is 8
	I1119 02:33:35.736917  301934 node_conditions.go:105] duration metric: took 2.917087ms to run NodePressure ...
	I1119 02:33:35.736947  301934 start.go:242] waiting for startup goroutines ...
	I1119 02:33:35.736956  301934 start.go:247] waiting for cluster config update ...
	I1119 02:33:35.736966  301934 start.go:256] writing updated cluster config ...
	I1119 02:33:35.737252  301934 ssh_runner.go:195] Run: rm -f paused
	I1119 02:33:35.741706  301934 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:33:35.746693  301934 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-bbvqz" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:35.751796  301934 pod_ready.go:94] pod "coredns-5dd5756b68-bbvqz" is "Ready"
	I1119 02:33:35.751821  301934 pod_ready.go:86] duration metric: took 5.102077ms for pod "coredns-5dd5756b68-bbvqz" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:35.754811  301934 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:35.759826  301934 pod_ready.go:94] pod "etcd-old-k8s-version-691094" is "Ready"
	I1119 02:33:35.759852  301934 pod_ready.go:86] duration metric: took 5.017899ms for pod "etcd-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:35.763701  301934 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:35.768670  301934 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-691094" is "Ready"
	I1119 02:33:35.768693  301934 pod_ready.go:86] duration metric: took 4.969901ms for pod "kube-apiserver-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:35.772227  301934 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:36.146684  301934 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-691094" is "Ready"
	I1119 02:33:36.146718  301934 pod_ready.go:86] duration metric: took 374.468133ms for pod "kube-controller-manager-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:36.347472  301934 pod_ready.go:83] waiting for pod "kube-proxy-79df5" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:36.746791  301934 pod_ready.go:94] pod "kube-proxy-79df5" is "Ready"
	I1119 02:33:36.746855  301934 pod_ready.go:86] duration metric: took 399.347819ms for pod "kube-proxy-79df5" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:36.946961  301934 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:37.347059  301934 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-691094" is "Ready"
	I1119 02:33:37.347090  301934 pod_ready.go:86] duration metric: took 400.10454ms for pod "kube-scheduler-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:37.347108  301934 pod_ready.go:40] duration metric: took 1.605370699s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:33:37.406793  301934 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1119 02:33:37.408685  301934 out.go:203] 
	W1119 02:33:37.410052  301934 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1119 02:33:37.411691  301934 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1119 02:33:37.413481  301934 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-691094" cluster and "default" namespace by default
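The skew warning a few lines up comes from comparing the client and server minor versions (1.34 vs 1.28, six minors apart); the same comparison by hand, assuming jq is available:

    kubectl version --output=json | jq -r '.clientVersion.gitVersion, .serverVersion.gitVersion'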
	W1119 02:33:37.511440  307222 node_ready.go:57] node "no-preload-483142" has "Ready":"False" status (will retry)
	W1119 02:33:40.007282  307222 node_ready.go:57] node "no-preload-483142" has "Ready":"False" status (will retry)
	I1119 02:33:42.519187  315363 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 02:33:42.519270  315363 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 02:33:42.519471  315363 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 02:33:42.519558  315363 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 02:33:42.519641  315363 kubeadm.go:319] OS: Linux
	I1119 02:33:42.519723  315363 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 02:33:42.519793  315363 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 02:33:42.519863  315363 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 02:33:42.519937  315363 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 02:33:42.520011  315363 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 02:33:42.520082  315363 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 02:33:42.520161  315363 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 02:33:42.520246  315363 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 02:33:42.520396  315363 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 02:33:42.520528  315363 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 02:33:42.520640  315363 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 02:33:42.520739  315363 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 02:33:42.522619  315363 out.go:252]   - Generating certificates and keys ...
	I1119 02:33:42.522717  315363 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 02:33:42.522778  315363 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 02:33:42.522841  315363 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 02:33:42.522898  315363 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 02:33:42.522948  315363 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 02:33:42.522986  315363 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 02:33:42.523065  315363 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 02:33:42.523231  315363 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-168452 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1119 02:33:42.523301  315363 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 02:33:42.523451  315363 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-168452 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1119 02:33:42.523527  315363 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 02:33:42.523599  315363 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 02:33:42.523658  315363 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 02:33:42.523737  315363 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 02:33:42.523787  315363 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 02:33:42.523833  315363 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 02:33:42.523879  315363 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 02:33:42.523945  315363 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 02:33:42.524004  315363 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 02:33:42.524082  315363 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 02:33:42.524137  315363 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 02:33:42.525751  315363 out.go:252]   - Booting up control plane ...
	I1119 02:33:42.525831  315363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 02:33:42.525893  315363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 02:33:42.525997  315363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 02:33:42.526121  315363 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 02:33:42.526235  315363 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 02:33:42.526323  315363 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 02:33:42.526401  315363 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 02:33:42.526441  315363 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 02:33:42.526546  315363 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 02:33:42.526633  315363 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 02:33:42.526684  315363 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001668097s
	I1119 02:33:42.526759  315363 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 02:33:42.526828  315363 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1119 02:33:42.526912  315363 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 02:33:42.526979  315363 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 02:33:42.527060  315363 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.143588684s
	I1119 02:33:42.527116  315363 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.751163591s
	I1119 02:33:42.527185  315363 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002351229s
	I1119 02:33:42.527279  315363 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 02:33:42.527418  315363 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 02:33:42.527475  315363 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 02:33:42.527642  315363 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-168452 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 02:33:42.527698  315363 kubeadm.go:319] [bootstrap-token] Using token: f9q4qi.t8dfm2zfbs2z2sgs
	I1119 02:33:42.529100  315363 out.go:252]   - Configuring RBAC rules ...
	I1119 02:33:42.529232  315363 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 02:33:42.529348  315363 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 02:33:42.529576  315363 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 02:33:42.529779  315363 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 02:33:42.529949  315363 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 02:33:42.530070  315363 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 02:33:42.530217  315363 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 02:33:42.530321  315363 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 02:33:42.530403  315363 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 02:33:42.530413  315363 kubeadm.go:319] 
	I1119 02:33:42.530492  315363 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 02:33:42.530502  315363 kubeadm.go:319] 
	I1119 02:33:42.530604  315363 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 02:33:42.530618  315363 kubeadm.go:319] 
	I1119 02:33:42.530647  315363 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 02:33:42.530726  315363 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 02:33:42.530797  315363 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 02:33:42.530809  315363 kubeadm.go:319] 
	I1119 02:33:42.530880  315363 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 02:33:42.530885  315363 kubeadm.go:319] 
	I1119 02:33:42.530954  315363 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 02:33:42.530981  315363 kubeadm.go:319] 
	I1119 02:33:42.531052  315363 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 02:33:42.531164  315363 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 02:33:42.531261  315363 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 02:33:42.531271  315363 kubeadm.go:319] 
	I1119 02:33:42.531424  315363 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 02:33:42.531551  315363 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 02:33:42.531570  315363 kubeadm.go:319] 
	I1119 02:33:42.531690  315363 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token f9q4qi.t8dfm2zfbs2z2sgs \
	I1119 02:33:42.531850  315363 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7672c4bccf136c60db27ba1101c0494bf335a1a16bcd609d07ab95cd78c58a5a \
	I1119 02:33:42.531878  315363 kubeadm.go:319] 	--control-plane 
	I1119 02:33:42.531885  315363 kubeadm.go:319] 
	I1119 02:33:42.531966  315363 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 02:33:42.531972  315363 kubeadm.go:319] 
	I1119 02:33:42.532046  315363 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token f9q4qi.t8dfm2zfbs2z2sgs \
	I1119 02:33:42.532149  315363 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7672c4bccf136c60db27ba1101c0494bf335a1a16bcd609d07ab95cd78c58a5a 
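The --discovery-token-ca-cert-hash printed in the join command above is a SHA-256 digest of the cluster CA's public key; per the kubeadm docs it can be recomputed on the control plane with the following (path shown is the kubeadm default; this cluster keeps its CA at /var/lib/minikube/certs/ca.crt):

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'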
	I1119 02:33:42.532161  315363 cni.go:84] Creating CNI manager for ""
	I1119 02:33:42.532167  315363 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 02:33:42.535194  315363 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 02:33:42.536650  315363 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 02:33:42.541710  315363 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 02:33:42.541734  315363 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 02:33:42.556040  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 02:33:42.817018  315363 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 02:33:42.817147  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:42.817217  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-168452 minikube.k8s.io/updated_at=2025_11_19T02_33_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=embed-certs-168452 minikube.k8s.io/primary=true
	I1119 02:33:42.828812  315363 ops.go:34] apiserver oom_adj: -16
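The -16 read back above is the legacy procfs OOM knob; kernels since 2.6.36 scale it to and from oom_score_adj (approximately), so either file shows how strongly the apiserver is shielded from the OOM killer:

    pid=$(pgrep kube-apiserver)
    cat /proc/$pid/oom_adj        # legacy scale, -17..15
    cat /proc/$pid/oom_score_adj  # modern scale, -1000..1000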
	I1119 02:33:42.896633  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:43.396810  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:43.896801  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:44.397677  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1119 02:33:46.450455  208368 system_pods.go:55] pod list returned error: the server was unable to return a response in the time allotted, but may still be processing the request (get pods)
	I1119 02:33:46.452233  208368 out.go:203] 
	W1119 02:33:46.453522  208368 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for system pods: apiserver never returned a pod list
	W1119 02:33:46.453544  208368 out.go:285] * 
	W1119 02:33:46.455831  208368 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 02:33:46.457044  208368 out.go:203] 
	W1119 02:33:42.007484  307222 node_ready.go:57] node "no-preload-483142" has "Ready":"False" status (will retry)
	W1119 02:33:44.007813  307222 node_ready.go:57] node "no-preload-483142" has "Ready":"False" status (will retry)
	W1119 02:33:46.008192  307222 node_ready.go:57] node "no-preload-483142" has "Ready":"False" status (will retry)
	I1119 02:33:44.897377  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:45.397137  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:45.897616  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:46.397448  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:46.896710  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:47.397632  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:47.897150  315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:33:48.003028  315363 kubeadm.go:1114] duration metric: took 5.18596901s to wait for elevateKubeSystemPrivileges
	I1119 02:33:48.003056  315363 kubeadm.go:403] duration metric: took 17.566632128s to StartCluster
	I1119 02:33:48.003071  315363 settings.go:142] acquiring lock: {Name:mka23b77a5b0acf39a3c823ca2c708cd59c6f5ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:48.003125  315363 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11107/kubeconfig
	I1119 02:33:48.005668  315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/kubeconfig: {Name:mka1017ae43351e828a5b28d36aa5ae57c3e40e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:33:48.005964  315363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 02:33:48.005984  315363 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 02:33:48.006098  315363 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:33:48.006191  315363 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-168452"
	I1119 02:33:48.006211  315363 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-168452"
	I1119 02:33:48.006209  315363 addons.go:70] Setting default-storageclass=true in profile "embed-certs-168452"
	I1119 02:33:48.006218  315363 config.go:182] Loaded profile config "embed-certs-168452": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:33:48.006231  315363 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-168452"
	I1119 02:33:48.006249  315363 host.go:66] Checking if "embed-certs-168452" exists ...
	I1119 02:33:48.006692  315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Status}}
	I1119 02:33:48.006819  315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Status}}
	I1119 02:33:48.007901  315363 out.go:179] * Verifying Kubernetes components...
	I1119 02:33:48.009142  315363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:33:48.032568  315363 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:33:48.032594  315363 addons.go:239] Setting addon default-storageclass=true in "embed-certs-168452"
	I1119 02:33:48.032649  315363 host.go:66] Checking if "embed-certs-168452" exists ...
	I1119 02:33:48.033140  315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Status}}
	I1119 02:33:48.034177  315363 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:33:48.034248  315363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:33:48.034332  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:48.063775  315363 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:33:48.063802  315363 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:33:48.063864  315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
	I1119 02:33:48.067763  315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
	I1119 02:33:48.088481  315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
	I1119 02:33:48.118977  315363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
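The sed pipeline above rewrites the coredns ConfigMap in flight and replaces it, injecting a hosts block ahead of the forward directive so host.minikube.internal resolves from inside the cluster (it also adds a log directive before errors). The injected Corefile fragment, reflowed from the sed expression (host IP from this run):

    hosts {
       192.168.94.1 host.minikube.internal
       fallthrough
    }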
	I1119 02:33:48.181811  315363 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:33:48.192106  315363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:33:48.217510  315363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:33:48.350174  315363 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1119 02:33:48.351838  315363 node_ready.go:35] waiting up to 6m0s for node "embed-certs-168452" to be "Ready" ...
	I1119 02:33:48.575859  315363 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 02:33:48.577031  315363 addons.go:515] duration metric: took 570.934719ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 02:33:48.855157  315363 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-168452" context rescaled to 1 replicas
	I1119 02:33:47.507132  307222 node_ready.go:49] node "no-preload-483142" is "Ready"
	I1119 02:33:47.507166  307222 node_ready.go:38] duration metric: took 14.002781703s for node "no-preload-483142" to be "Ready" ...
	I1119 02:33:47.507196  307222 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:33:47.507253  307222 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:33:47.522586  307222 api_server.go:72] duration metric: took 14.39103106s to wait for apiserver process to appear ...
	I1119 02:33:47.522619  307222 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:33:47.522641  307222 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 02:33:47.526803  307222 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1119 02:33:47.527974  307222 api_server.go:141] control plane version: v1.34.1
	I1119 02:33:47.528002  307222 api_server.go:131] duration metric: took 5.376603ms to wait for apiserver health ...
	I1119 02:33:47.528022  307222 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 02:33:47.531978  307222 system_pods.go:59] 8 kube-system pods found
	I1119 02:33:47.532021  307222 system_pods.go:61] "coredns-66bc5c9577-zgfk9" [a3d24a51-2fec-4ae7-852e-c65aef957597] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:47.532030  307222 system_pods.go:61] "etcd-no-preload-483142" [7c2b0e3b-9242-4a7b-adbc-49e846f76e61] Running
	I1119 02:33:47.532039  307222 system_pods.go:61] "kindnet-6nr7d" [b6bf7df0-8af6-4156-990c-6f70cc159a8c] Running
	I1119 02:33:47.532046  307222 system_pods.go:61] "kube-apiserver-no-preload-483142" [1321916d-38ae-4f26-9d9f-0917576d2d6b] Running
	I1119 02:33:47.532053  307222 system_pods.go:61] "kube-controller-manager-no-preload-483142" [62d9ccdb-1224-4ba6-a8b6-4270a0043a26] Running
	I1119 02:33:47.532059  307222 system_pods.go:61] "kube-proxy-xhrdt" [2ed3d00d-7760-4eed-af73-abf314cf5901] Running
	I1119 02:33:47.532066  307222 system_pods.go:61] "kube-scheduler-no-preload-483142" [c58ff0b6-59c3-4677-8770-42bfef0d53a2] Running
	I1119 02:33:47.532078  307222 system_pods.go:61] "storage-provisioner" [c66a6926-3a4a-4aa9-b40b-349e1b056683] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:47.532088  307222 system_pods.go:74] duration metric: took 4.058015ms to wait for pod list to return data ...
	I1119 02:33:47.532104  307222 default_sa.go:34] waiting for default service account to be created ...
	I1119 02:33:47.535565  307222 default_sa.go:45] found service account: "default"
	I1119 02:33:47.535586  307222 default_sa.go:55] duration metric: took 3.475549ms for default service account to be created ...
	I1119 02:33:47.535596  307222 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 02:33:47.539134  307222 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:47.539173  307222 system_pods.go:89] "coredns-66bc5c9577-zgfk9" [a3d24a51-2fec-4ae7-852e-c65aef957597] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:47.539181  307222 system_pods.go:89] "etcd-no-preload-483142" [7c2b0e3b-9242-4a7b-adbc-49e846f76e61] Running
	I1119 02:33:47.539188  307222 system_pods.go:89] "kindnet-6nr7d" [b6bf7df0-8af6-4156-990c-6f70cc159a8c] Running
	I1119 02:33:47.539192  307222 system_pods.go:89] "kube-apiserver-no-preload-483142" [1321916d-38ae-4f26-9d9f-0917576d2d6b] Running
	I1119 02:33:47.539196  307222 system_pods.go:89] "kube-controller-manager-no-preload-483142" [62d9ccdb-1224-4ba6-a8b6-4270a0043a26] Running
	I1119 02:33:47.539204  307222 system_pods.go:89] "kube-proxy-xhrdt" [2ed3d00d-7760-4eed-af73-abf314cf5901] Running
	I1119 02:33:47.539210  307222 system_pods.go:89] "kube-scheduler-no-preload-483142" [c58ff0b6-59c3-4677-8770-42bfef0d53a2] Running
	I1119 02:33:47.539215  307222 system_pods.go:89] "storage-provisioner" [c66a6926-3a4a-4aa9-b40b-349e1b056683] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:47.539249  307222 retry.go:31] will retry after 294.264342ms: missing components: kube-dns
	I1119 02:33:47.840195  307222 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:47.840235  307222 system_pods.go:89] "coredns-66bc5c9577-zgfk9" [a3d24a51-2fec-4ae7-852e-c65aef957597] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:47.840244  307222 system_pods.go:89] "etcd-no-preload-483142" [7c2b0e3b-9242-4a7b-adbc-49e846f76e61] Running
	I1119 02:33:47.840253  307222 system_pods.go:89] "kindnet-6nr7d" [b6bf7df0-8af6-4156-990c-6f70cc159a8c] Running
	I1119 02:33:47.840257  307222 system_pods.go:89] "kube-apiserver-no-preload-483142" [1321916d-38ae-4f26-9d9f-0917576d2d6b] Running
	I1119 02:33:47.840262  307222 system_pods.go:89] "kube-controller-manager-no-preload-483142" [62d9ccdb-1224-4ba6-a8b6-4270a0043a26] Running
	I1119 02:33:47.840267  307222 system_pods.go:89] "kube-proxy-xhrdt" [2ed3d00d-7760-4eed-af73-abf314cf5901] Running
	I1119 02:33:47.840272  307222 system_pods.go:89] "kube-scheduler-no-preload-483142" [c58ff0b6-59c3-4677-8770-42bfef0d53a2] Running
	I1119 02:33:47.840288  307222 system_pods.go:89] "storage-provisioner" [c66a6926-3a4a-4aa9-b40b-349e1b056683] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:47.840308  307222 retry.go:31] will retry after 249.747879ms: missing components: kube-dns
	I1119 02:33:48.097280  307222 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:48.097316  307222 system_pods.go:89] "coredns-66bc5c9577-zgfk9" [a3d24a51-2fec-4ae7-852e-c65aef957597] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:48.097322  307222 system_pods.go:89] "etcd-no-preload-483142" [7c2b0e3b-9242-4a7b-adbc-49e846f76e61] Running
	I1119 02:33:48.097331  307222 system_pods.go:89] "kindnet-6nr7d" [b6bf7df0-8af6-4156-990c-6f70cc159a8c] Running
	I1119 02:33:48.097336  307222 system_pods.go:89] "kube-apiserver-no-preload-483142" [1321916d-38ae-4f26-9d9f-0917576d2d6b] Running
	I1119 02:33:48.097342  307222 system_pods.go:89] "kube-controller-manager-no-preload-483142" [62d9ccdb-1224-4ba6-a8b6-4270a0043a26] Running
	I1119 02:33:48.097346  307222 system_pods.go:89] "kube-proxy-xhrdt" [2ed3d00d-7760-4eed-af73-abf314cf5901] Running
	I1119 02:33:48.097350  307222 system_pods.go:89] "kube-scheduler-no-preload-483142" [c58ff0b6-59c3-4677-8770-42bfef0d53a2] Running
	I1119 02:33:48.097356  307222 system_pods.go:89] "storage-provisioner" [c66a6926-3a4a-4aa9-b40b-349e1b056683] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:48.097389  307222 retry.go:31] will retry after 312.943754ms: missing components: kube-dns
	I1119 02:33:48.416167  307222 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:48.416224  307222 system_pods.go:89] "coredns-66bc5c9577-zgfk9" [a3d24a51-2fec-4ae7-852e-c65aef957597] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:48.416233  307222 system_pods.go:89] "etcd-no-preload-483142" [7c2b0e3b-9242-4a7b-adbc-49e846f76e61] Running
	I1119 02:33:48.416242  307222 system_pods.go:89] "kindnet-6nr7d" [b6bf7df0-8af6-4156-990c-6f70cc159a8c] Running
	I1119 02:33:48.416249  307222 system_pods.go:89] "kube-apiserver-no-preload-483142" [1321916d-38ae-4f26-9d9f-0917576d2d6b] Running
	I1119 02:33:48.416265  307222 system_pods.go:89] "kube-controller-manager-no-preload-483142" [62d9ccdb-1224-4ba6-a8b6-4270a0043a26] Running
	I1119 02:33:48.416285  307222 system_pods.go:89] "kube-proxy-xhrdt" [2ed3d00d-7760-4eed-af73-abf314cf5901] Running
	I1119 02:33:48.416290  307222 system_pods.go:89] "kube-scheduler-no-preload-483142" [c58ff0b6-59c3-4677-8770-42bfef0d53a2] Running
	I1119 02:33:48.416304  307222 system_pods.go:89] "storage-provisioner" [c66a6926-3a4a-4aa9-b40b-349e1b056683] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:48.416338  307222 retry.go:31] will retry after 380.92269ms: missing components: kube-dns
	I1119 02:33:48.802673  307222 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:48.802712  307222 system_pods.go:89] "coredns-66bc5c9577-zgfk9" [a3d24a51-2fec-4ae7-852e-c65aef957597] Running
	I1119 02:33:48.802721  307222 system_pods.go:89] "etcd-no-preload-483142" [7c2b0e3b-9242-4a7b-adbc-49e846f76e61] Running
	I1119 02:33:48.802726  307222 system_pods.go:89] "kindnet-6nr7d" [b6bf7df0-8af6-4156-990c-6f70cc159a8c] Running
	I1119 02:33:48.802731  307222 system_pods.go:89] "kube-apiserver-no-preload-483142" [1321916d-38ae-4f26-9d9f-0917576d2d6b] Running
	I1119 02:33:48.802737  307222 system_pods.go:89] "kube-controller-manager-no-preload-483142" [62d9ccdb-1224-4ba6-a8b6-4270a0043a26] Running
	I1119 02:33:48.802742  307222 system_pods.go:89] "kube-proxy-xhrdt" [2ed3d00d-7760-4eed-af73-abf314cf5901] Running
	I1119 02:33:48.802755  307222 system_pods.go:89] "kube-scheduler-no-preload-483142" [c58ff0b6-59c3-4677-8770-42bfef0d53a2] Running
	I1119 02:33:48.802764  307222 system_pods.go:89] "storage-provisioner" [c66a6926-3a4a-4aa9-b40b-349e1b056683] Running
	I1119 02:33:48.802775  307222 system_pods.go:126] duration metric: took 1.26717246s to wait for k8s-apps to be running ...
	I1119 02:33:48.802788  307222 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 02:33:48.802838  307222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:33:48.819234  307222 system_svc.go:56] duration metric: took 16.435872ms WaitForService to wait for kubelet
	I1119 02:33:48.819260  307222 kubeadm.go:587] duration metric: took 15.68771243s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
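
The kubelet check just above shells out to systemd and treats exit status 0 as "active". A minimal sketch of the same probe, reusing the exact command string from the log (the Go wrapper itself is an assumption for illustration, not minikube's system_svc.go):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// systemctl is-active exits 0 only when the queried units are active;
	// --quiet suppresses the state output, so the exit code is the answer.
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}
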
	I1119 02:33:48.819276  307222 node_conditions.go:102] verifying NodePressure condition ...
	I1119 02:33:48.823861  307222 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 02:33:48.823901  307222 node_conditions.go:123] node cpu capacity is 8
	I1119 02:33:48.823924  307222 node_conditions.go:105] duration metric: took 4.642889ms to run NodePressure ...
	I1119 02:33:48.823938  307222 start.go:242] waiting for startup goroutines ...
	I1119 02:33:48.823947  307222 start.go:247] waiting for cluster config update ...
	I1119 02:33:48.823960  307222 start.go:256] writing updated cluster config ...
	I1119 02:33:48.824308  307222 ssh_runner.go:195] Run: rm -f paused
	I1119 02:33:48.829946  307222 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:33:48.834766  307222 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zgfk9" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:48.839922  307222 pod_ready.go:94] pod "coredns-66bc5c9577-zgfk9" is "Ready"
	I1119 02:33:48.839950  307222 pod_ready.go:86] duration metric: took 5.154322ms for pod "coredns-66bc5c9577-zgfk9" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:48.842702  307222 pod_ready.go:83] waiting for pod "etcd-no-preload-483142" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:48.848818  307222 pod_ready.go:94] pod "etcd-no-preload-483142" is "Ready"
	I1119 02:33:48.848850  307222 pod_ready.go:86] duration metric: took 6.115348ms for pod "etcd-no-preload-483142" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:48.851685  307222 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-483142" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:48.856283  307222 pod_ready.go:94] pod "kube-apiserver-no-preload-483142" is "Ready"
	I1119 02:33:48.856303  307222 pod_ready.go:86] duration metric: took 4.595808ms for pod "kube-apiserver-no-preload-483142" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:48.858418  307222 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-483142" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:49.235039  307222 pod_ready.go:94] pod "kube-controller-manager-no-preload-483142" is "Ready"
	I1119 02:33:49.235070  307222 pod_ready.go:86] duration metric: took 376.631643ms for pod "kube-controller-manager-no-preload-483142" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:49.435524  307222 pod_ready.go:83] waiting for pod "kube-proxy-xhrdt" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:49.834741  307222 pod_ready.go:94] pod "kube-proxy-xhrdt" is "Ready"
	I1119 02:33:49.834767  307222 pod_ready.go:86] duration metric: took 399.219221ms for pod "kube-proxy-xhrdt" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:50.035303  307222 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-483142" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:50.434632  307222 pod_ready.go:94] pod "kube-scheduler-no-preload-483142" is "Ready"
	I1119 02:33:50.434662  307222 pod_ready.go:86] duration metric: took 399.329431ms for pod "kube-scheduler-no-preload-483142" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:33:50.434673  307222 pod_ready.go:40] duration metric: took 1.604675519s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:33:50.483179  307222 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 02:33:50.485257  307222 out.go:179] * Done! kubectl is now configured to use "no-preload-483142" cluster and "default" namespace by default
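
The pod_ready waits above poll each control-plane pod, selected by label, for the PodReady condition under the logged 4m0s budget. A minimal client-go sketch of that kind of check, assuming a kubeconfig at the default path; the fixed 500ms poll interval and the structure here are illustrative assumptions, not minikube's pod_ready.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's PodReady condition is True.
func isReady(p corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// The same label selectors the log waits on.
	selectors := []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler"}
	deadline := time.Now().Add(4 * time.Minute) // the 4m0s "extra waiting" budget
	for _, sel := range selectors {
		for {
			pods, err := client.CoreV1().Pods("kube-system").List(context.Background(),
				metav1.ListOptions{LabelSelector: sel})
			if err == nil && len(pods.Items) > 0 && isReady(pods.Items[0]) {
				fmt.Printf("pod %q is Ready\n", pods.Items[0].Name)
				break
			}
			if time.Now().After(deadline) {
				panic("timed out waiting for " + sel)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
}
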
	W1119 02:33:50.355270  315363 node_ready.go:57] node "embed-certs-168452" has "Ready":"False" status (will retry)
	W1119 02:33:52.857401  315363 node_ready.go:57] node "embed-certs-168452" has "Ready":"False" status (will retry)
	W1119 02:33:55.355262  315363 node_ready.go:57] node "embed-certs-168452" has "Ready":"False" status (will retry)
	W1119 02:33:57.855402  315363 node_ready.go:57] node "embed-certs-168452" has "Ready":"False" status (will retry)
	I1119 02:33:58.855203  315363 node_ready.go:49] node "embed-certs-168452" is "Ready"
	I1119 02:33:58.855237  315363 node_ready.go:38] duration metric: took 10.503369895s for node "embed-certs-168452" to be "Ready" ...
	I1119 02:33:58.855255  315363 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:33:58.855343  315363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:33:58.869209  315363 api_server.go:72] duration metric: took 10.863154231s to wait for apiserver process to appear ...
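
The apiserver wait above is a process-level check that precedes any HTTP probing: pgrep looks for the newest process whose full command line matches the kube-apiserver pattern. A small sketch of that step, with the command string taken from the log and the Go wrapper assumed:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// pgrep exits 0 and prints the newest (-n) PID whose full command line
	// (-f) exactly (-x) matches the pattern, as in the api_server.go check above.
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("apiserver process not found:", err)
		return
	}
	fmt.Printf("apiserver pid: %s", out)
}
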
	I1119 02:33:58.869250  315363 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:33:58.869274  315363 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 02:33:58.875569  315363 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1119 02:33:58.876575  315363 api_server.go:141] control plane version: v1.34.1
	I1119 02:33:58.876617  315363 api_server.go:131] duration metric: took 7.360045ms to wait for apiserver health ...
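
The healthz step logged above is a plain HTTPS GET that treats a 200 response with body "ok" as healthy. A minimal sketch against the endpoint from the log; skipping TLS verification is a brevity assumption here, where the real client trusts the cluster's CA certificate instead:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		// Assumption for a short example only; never do this in production.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.94.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the body "ok".
	fmt.Printf("https://192.168.94.2:8443/healthz returned %d: %s\n", resp.StatusCode, body)
}
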
	I1119 02:33:58.876629  315363 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 02:33:58.880702  315363 system_pods.go:59] 8 kube-system pods found
	I1119 02:33:58.880740  315363 system_pods.go:61] "coredns-66bc5c9577-zjkgg" [5c9ef71f-e4c3-4b8d-9ad3-71e0a56231e3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:58.880760  315363 system_pods.go:61] "etcd-embed-certs-168452" [d0ec7dd4-3fea-4cb9-9409-d7580e3096e5] Running
	I1119 02:33:58.880773  315363 system_pods.go:61] "kindnet-rf6v9" [6e29d839-0594-41f7-bfd8-1f9ab66b4c86] Running
	I1119 02:33:58.880780  315363 system_pods.go:61] "kube-apiserver-embed-certs-168452" [1a173dec-e248-4772-8884-094a1416f6bc] Running
	I1119 02:33:58.880788  315363 system_pods.go:61] "kube-controller-manager-embed-certs-168452" [54a570a5-683f-435f-8ef3-801a384a4e4c] Running
	I1119 02:33:58.880793  315363 system_pods.go:61] "kube-proxy-v65n7" [edc341f0-decd-4b30-a13d-a730cb8fc47d] Running
	I1119 02:33:58.880798  315363 system_pods.go:61] "kube-scheduler-embed-certs-168452" [0547e424-6b3a-487f-94ba-a3f38ab4d102] Running
	I1119 02:33:58.880805  315363 system_pods.go:61] "storage-provisioner" [eebce997-029a-4da2-b6cd-bb0ff195ebbe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:58.880814  315363 system_pods.go:74] duration metric: took 4.173761ms to wait for pod list to return data ...
	I1119 02:33:58.880828  315363 default_sa.go:34] waiting for default service account to be created ...
	I1119 02:33:58.888971  315363 default_sa.go:45] found service account: "default"
	I1119 02:33:58.888998  315363 default_sa.go:55] duration metric: took 8.162397ms for default service account to be created ...
	I1119 02:33:58.889023  315363 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 02:33:58.892650  315363 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:58.892685  315363 system_pods.go:89] "coredns-66bc5c9577-zjkgg" [5c9ef71f-e4c3-4b8d-9ad3-71e0a56231e3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:58.892694  315363 system_pods.go:89] "etcd-embed-certs-168452" [d0ec7dd4-3fea-4cb9-9409-d7580e3096e5] Running
	I1119 02:33:58.892703  315363 system_pods.go:89] "kindnet-rf6v9" [6e29d839-0594-41f7-bfd8-1f9ab66b4c86] Running
	I1119 02:33:58.892709  315363 system_pods.go:89] "kube-apiserver-embed-certs-168452" [1a173dec-e248-4772-8884-094a1416f6bc] Running
	I1119 02:33:58.892716  315363 system_pods.go:89] "kube-controller-manager-embed-certs-168452" [54a570a5-683f-435f-8ef3-801a384a4e4c] Running
	I1119 02:33:58.892721  315363 system_pods.go:89] "kube-proxy-v65n7" [edc341f0-decd-4b30-a13d-a730cb8fc47d] Running
	I1119 02:33:58.892726  315363 system_pods.go:89] "kube-scheduler-embed-certs-168452" [0547e424-6b3a-487f-94ba-a3f38ab4d102] Running
	I1119 02:33:58.892734  315363 system_pods.go:89] "storage-provisioner" [eebce997-029a-4da2-b6cd-bb0ff195ebbe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:58.892772  315363 retry.go:31] will retry after 264.439801ms: missing components: kube-dns
	I1119 02:33:59.162425  315363 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:59.162466  315363 system_pods.go:89] "coredns-66bc5c9577-zjkgg" [5c9ef71f-e4c3-4b8d-9ad3-71e0a56231e3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:59.162474  315363 system_pods.go:89] "etcd-embed-certs-168452" [d0ec7dd4-3fea-4cb9-9409-d7580e3096e5] Running
	I1119 02:33:59.162483  315363 system_pods.go:89] "kindnet-rf6v9" [6e29d839-0594-41f7-bfd8-1f9ab66b4c86] Running
	I1119 02:33:59.162488  315363 system_pods.go:89] "kube-apiserver-embed-certs-168452" [1a173dec-e248-4772-8884-094a1416f6bc] Running
	I1119 02:33:59.162494  315363 system_pods.go:89] "kube-controller-manager-embed-certs-168452" [54a570a5-683f-435f-8ef3-801a384a4e4c] Running
	I1119 02:33:59.162499  315363 system_pods.go:89] "kube-proxy-v65n7" [edc341f0-decd-4b30-a13d-a730cb8fc47d] Running
	I1119 02:33:59.162505  315363 system_pods.go:89] "kube-scheduler-embed-certs-168452" [0547e424-6b3a-487f-94ba-a3f38ab4d102] Running
	I1119 02:33:59.162512  315363 system_pods.go:89] "storage-provisioner" [eebce997-029a-4da2-b6cd-bb0ff195ebbe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:59.162533  315363 retry.go:31] will retry after 355.424259ms: missing components: kube-dns
	I1119 02:33:59.524153  315363 system_pods.go:86] 8 kube-system pods found
	I1119 02:33:59.524197  315363 system_pods.go:89] "coredns-66bc5c9577-zjkgg" [5c9ef71f-e4c3-4b8d-9ad3-71e0a56231e3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:33:59.524212  315363 system_pods.go:89] "etcd-embed-certs-168452" [d0ec7dd4-3fea-4cb9-9409-d7580e3096e5] Running
	I1119 02:33:59.524223  315363 system_pods.go:89] "kindnet-rf6v9" [6e29d839-0594-41f7-bfd8-1f9ab66b4c86] Running
	I1119 02:33:59.524229  315363 system_pods.go:89] "kube-apiserver-embed-certs-168452" [1a173dec-e248-4772-8884-094a1416f6bc] Running
	I1119 02:33:59.524235  315363 system_pods.go:89] "kube-controller-manager-embed-certs-168452" [54a570a5-683f-435f-8ef3-801a384a4e4c] Running
	I1119 02:33:59.524241  315363 system_pods.go:89] "kube-proxy-v65n7" [edc341f0-decd-4b30-a13d-a730cb8fc47d] Running
	I1119 02:33:59.524255  315363 system_pods.go:89] "kube-scheduler-embed-certs-168452" [0547e424-6b3a-487f-94ba-a3f38ab4d102] Running
	I1119 02:33:59.524262  315363 system_pods.go:89] "storage-provisioner" [eebce997-029a-4da2-b6cd-bb0ff195ebbe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:33:59.524283  315363 retry.go:31] will retry after 458.998162ms: missing components: kube-dns
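
Each "will retry after ..." line above comes from a jittered retry loop: the wait for k8s-apps re-lists the kube-system pods, and on a miss sleeps a randomized interval, which is why the logged delays (264ms, 355ms, 458ms, ...) never repeat. A minimal sketch of that pattern, with the base interval and timeout chosen as assumptions rather than minikube's actual retry.go tuning:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil re-runs fn with a jittered delay until it succeeds or the
// timeout elapses, mirroring the "will retry after ..." lines in the log.
func retryUntil(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	base := 250 * time.Millisecond // assumed base interval
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return err
		}
		// Randomizing each delay keeps concurrent waiters from polling in
		// lockstep, so no two logged intervals match exactly.
		d := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
}

func main() {
	attempts := 0
	_ = retryUntil(6*time.Minute, func() error {
		if attempts++; attempts < 4 {
			return errors.New("missing components: kube-dns")
		}
		return nil
	})
}
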
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	be0d0f1147393       56cc512116c8f       8 seconds ago       Running             busybox                   0                   ee84bdf33f72f       busybox                                     default
	0b9c87419f31d       52546a367cc9e       13 seconds ago      Running             coredns                   0                   cc0a38a1bc6e5       coredns-66bc5c9577-zgfk9                    kube-system
	535511cf0eb8e       6e38f40d628db       13 seconds ago      Running             storage-provisioner       0                   fb0f81d1477d7       storage-provisioner                         kube-system
	1cfb54b0c3a9c       409467f978b4a       24 seconds ago      Running             kindnet-cni               0                   320ef97e0948d       kindnet-6nr7d                               kube-system
	5b1ec14d6e4ff       fc25172553d79       28 seconds ago      Running             kube-proxy                0                   006f0b434dcee       kube-proxy-xhrdt                            kube-system
	970216c90257f       5f1f5298c888d       39 seconds ago      Running             etcd                      0                   cfda256d6d358       etcd-no-preload-483142                      kube-system
	0de9dc5d78d1d       c80c8dbafe7dd       39 seconds ago      Running             kube-controller-manager   0                   4ca2b31dc615a       kube-controller-manager-no-preload-483142   kube-system
	73f8e07d52017       7dd6aaa1717ab       39 seconds ago      Running             kube-scheduler            0                   944a754ca109f       kube-scheduler-no-preload-483142            kube-system
	25547ba51e3d7       c3994bc696102       39 seconds ago      Running             kube-apiserver            0                   2cecfd1a4e942       kube-apiserver-no-preload-483142            kube-system
	
	
	==> containerd <==
	Nov 19 02:33:47 no-preload-483142 containerd[661]: time="2025-11-19T02:33:47.844384690Z" level=info msg="Container 0b9c87419f31d22d32fb5aa8dd18a375dbaf6ff804f443acb1a68acc4e869129: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 02:33:47 no-preload-483142 containerd[661]: time="2025-11-19T02:33:47.848915038Z" level=info msg="CreateContainer within sandbox \"fb0f81d1477d72b6ba303a44d7dcfe8f587da3c0a771c6a1c4b008777ff2fe2d\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"535511cf0eb8e278c6a97e248917117a13744f6d36d6b63bef86d79ddfc7c849\""
	Nov 19 02:33:47 no-preload-483142 containerd[661]: time="2025-11-19T02:33:47.849791624Z" level=info msg="StartContainer for \"535511cf0eb8e278c6a97e248917117a13744f6d36d6b63bef86d79ddfc7c849\""
	Nov 19 02:33:47 no-preload-483142 containerd[661]: time="2025-11-19T02:33:47.850804812Z" level=info msg="connecting to shim 535511cf0eb8e278c6a97e248917117a13744f6d36d6b63bef86d79ddfc7c849" address="unix:///run/containerd/s/80bc605af1512d0a1687772a5377c904b203ae314189198a2e7152d00a32fcbf" protocol=ttrpc version=3
	Nov 19 02:33:47 no-preload-483142 containerd[661]: time="2025-11-19T02:33:47.855314195Z" level=info msg="CreateContainer within sandbox \"cc0a38a1bc6e52bfb86f9111b03705d31cb4f105133f7badf8cd0bad94df215a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0b9c87419f31d22d32fb5aa8dd18a375dbaf6ff804f443acb1a68acc4e869129\""
	Nov 19 02:33:47 no-preload-483142 containerd[661]: time="2025-11-19T02:33:47.855977424Z" level=info msg="StartContainer for \"0b9c87419f31d22d32fb5aa8dd18a375dbaf6ff804f443acb1a68acc4e869129\""
	Nov 19 02:33:47 no-preload-483142 containerd[661]: time="2025-11-19T02:33:47.857103150Z" level=info msg="connecting to shim 0b9c87419f31d22d32fb5aa8dd18a375dbaf6ff804f443acb1a68acc4e869129" address="unix:///run/containerd/s/7937bacd3af16e82d3c89d95ab25bc4f9992a2378956bcfd467a31692c145a49" protocol=ttrpc version=3
	Nov 19 02:33:47 no-preload-483142 containerd[661]: time="2025-11-19T02:33:47.921627759Z" level=info msg="StartContainer for \"535511cf0eb8e278c6a97e248917117a13744f6d36d6b63bef86d79ddfc7c849\" returns successfully"
	Nov 19 02:33:47 no-preload-483142 containerd[661]: time="2025-11-19T02:33:47.937579728Z" level=info msg="StartContainer for \"0b9c87419f31d22d32fb5aa8dd18a375dbaf6ff804f443acb1a68acc4e869129\" returns successfully"
	Nov 19 02:33:50 no-preload-483142 containerd[661]: time="2025-11-19T02:33:50.961421999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:90b24763-24ed-4631-9502-e0fab55d3520,Namespace:default,Attempt:0,}"
	Nov 19 02:33:51 no-preload-483142 containerd[661]: time="2025-11-19T02:33:51.009979068Z" level=info msg="connecting to shim ee84bdf33f72f612f9e552eb8c04a1415d82721b5ba15239cdcf8cba76b203d9" address="unix:///run/containerd/s/82ad45771f8515c1c380bc8b249a10a5518622c1ec1b8d7dfd54393183832080" namespace=k8s.io protocol=ttrpc version=3
	Nov 19 02:33:51 no-preload-483142 containerd[661]: time="2025-11-19T02:33:51.084140935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:90b24763-24ed-4631-9502-e0fab55d3520,Namespace:default,Attempt:0,} returns sandbox id \"ee84bdf33f72f612f9e552eb8c04a1415d82721b5ba15239cdcf8cba76b203d9\""
	Nov 19 02:33:51 no-preload-483142 containerd[661]: time="2025-11-19T02:33:51.086242217Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 02:33:53 no-preload-483142 containerd[661]: time="2025-11-19T02:33:53.281283895Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 02:33:53 no-preload-483142 containerd[661]: time="2025-11-19T02:33:53.282507525Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396646"
	Nov 19 02:33:53 no-preload-483142 containerd[661]: time="2025-11-19T02:33:53.284243638Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 02:33:53 no-preload-483142 containerd[661]: time="2025-11-19T02:33:53.286883382Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 02:33:53 no-preload-483142 containerd[661]: time="2025-11-19T02:33:53.287409188Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.201119642s"
	Nov 19 02:33:53 no-preload-483142 containerd[661]: time="2025-11-19T02:33:53.287452431Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 19 02:33:53 no-preload-483142 containerd[661]: time="2025-11-19T02:33:53.292115595Z" level=info msg="CreateContainer within sandbox \"ee84bdf33f72f612f9e552eb8c04a1415d82721b5ba15239cdcf8cba76b203d9\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 19 02:33:53 no-preload-483142 containerd[661]: time="2025-11-19T02:33:53.302018693Z" level=info msg="Container be0d0f11473938dfef6cae268048f1ff3460754238c85a91abf307dfd89833a6: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 02:33:53 no-preload-483142 containerd[661]: time="2025-11-19T02:33:53.308897778Z" level=info msg="CreateContainer within sandbox \"ee84bdf33f72f612f9e552eb8c04a1415d82721b5ba15239cdcf8cba76b203d9\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"be0d0f11473938dfef6cae268048f1ff3460754238c85a91abf307dfd89833a6\""
	Nov 19 02:33:53 no-preload-483142 containerd[661]: time="2025-11-19T02:33:53.309679787Z" level=info msg="StartContainer for \"be0d0f11473938dfef6cae268048f1ff3460754238c85a91abf307dfd89833a6\""
	Nov 19 02:33:53 no-preload-483142 containerd[661]: time="2025-11-19T02:33:53.310775208Z" level=info msg="connecting to shim be0d0f11473938dfef6cae268048f1ff3460754238c85a91abf307dfd89833a6" address="unix:///run/containerd/s/82ad45771f8515c1c380bc8b249a10a5518622c1ec1b8d7dfd54393183832080" protocol=ttrpc version=3
	Nov 19 02:33:53 no-preload-483142 containerd[661]: time="2025-11-19T02:33:53.368521124Z" level=info msg="StartContainer for \"be0d0f11473938dfef6cae268048f1ff3460754238c85a91abf307dfd89833a6\" returns successfully"
	
	
	==> coredns [0b9c87419f31d22d32fb5aa8dd18a375dbaf6ff804f443acb1a68acc4e869129] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52829 - 49928 "HINFO IN 7672509729958589229.4000050543870758584. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021950058s
	
	
	==> describe nodes <==
	Name:               no-preload-483142
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-483142
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=no-preload-483142
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T02_33_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 02:33:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-483142
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 02:33:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 02:33:58 +0000   Wed, 19 Nov 2025 02:33:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 02:33:58 +0000   Wed, 19 Nov 2025 02:33:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 02:33:58 +0000   Wed, 19 Nov 2025 02:33:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 02:33:58 +0000   Wed, 19 Nov 2025 02:33:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-483142
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                91633eb1-f17c-4bd0-a804-d3558c3c2246
	  Boot ID:                    fea1659d-b751-4f87-a281-819adf52de2d
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-zgfk9                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-no-preload-483142                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         35s
	  kube-system                 kindnet-6nr7d                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-no-preload-483142             250m (3%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-no-preload-483142    200m (2%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-xhrdt                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-no-preload-483142             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27s   kube-proxy       
	  Normal  Starting                 34s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  34s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  34s   kubelet          Node no-preload-483142 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s   kubelet          Node no-preload-483142 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s   kubelet          Node no-preload-483142 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s   node-controller  Node no-preload-483142 event: Registered Node no-preload-483142 in Controller
	  Normal  NodeReady                14s   kubelet          Node no-preload-483142 status is now: NodeReady
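
The NodePressure verification earlier in the log ("node cpu capacity is 8", "ephemeral capacity is 304681132Ki") reads exactly the Capacity and Conditions fields shown in this describe output. A short client-go sketch of that read, assuming a kubeconfig at the default path; this is an illustration, not minikube's node_conditions.go:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity values correspond to the cpu / ephemeral-storage lines above.
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
			n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
		for _, c := range n.Status.Conditions {
			// MemoryPressure, DiskPressure, and PIDPressure should all be
			// False on a healthy node; only Ready should be True.
			if c.Type != corev1.NodeReady && c.Status == corev1.ConditionTrue {
				fmt.Printf("  node under pressure: %s\n", c.Type)
			}
		}
	}
}
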
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 cb 73 dc 0d 8a 08 06
	[Nov19 02:31] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 74 0c d7 a6 53 08 06
	[  +0.000339] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 cb 73 dc 0d 8a 08 06
	[ +28.680399] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 e9 7c 92 36 13 08 06
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff fa 7c 46 16 38 40 08 06
	[Nov19 02:32] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1a f6 63 65 24 bc 08 06
	[  +4.552839] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 88 80 5a bc 94 08 06
	[ +11.086189] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 76 d1 26 7f 3d 08 06
	[  +0.000377] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa 7c 46 16 38 40 08 06
	[  +9.270754] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a2 49 fd 34 51 3b 08 06
	[  +0.000702] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 88 80 5a bc 94 08 06
	[ +23.593864] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca 86 43 5f 18 4c 08 06
	[  +0.000495] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a f6 63 65 24 bc 08 06
	
	
	==> etcd [970216c90257f7f960253c399883e66d480f482f82f594baee0af9c0f9d16d2b] <==
	{"level":"info","ts":"2025-11-19T02:33:24.881949Z","caller":"traceutil/trace.go:172","msg":"trace[1606530972] transaction","detail":"{read_only:false; response_revision:24; number_of_response:1; }","duration":"268.997058ms","start":"2025-11-19T02:33:24.612811Z","end":"2025-11-19T02:33:24.881808Z","steps":["trace[1606530972] 'process raft request'  (duration: 268.346969ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:33:24.882312Z","caller":"traceutil/trace.go:172","msg":"trace[1846514929] transaction","detail":"{read_only:false; response_revision:30; number_of_response:1; }","duration":"268.639917ms","start":"2025-11-19T02:33:24.613581Z","end":"2025-11-19T02:33:24.882221Z","steps":["trace[1846514929] 'process raft request'  (duration: 267.932284ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:33:24.882343Z","caller":"traceutil/trace.go:172","msg":"trace[298036101] transaction","detail":"{read_only:false; response_revision:28; number_of_response:1; }","duration":"269.347669ms","start":"2025-11-19T02:33:24.612977Z","end":"2025-11-19T02:33:24.882325Z","steps":["trace[298036101] 'process raft request'  (duration: 268.464896ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:33:24.882588Z","caller":"traceutil/trace.go:172","msg":"trace[638825018] transaction","detail":"{read_only:false; response_revision:27; number_of_response:1; }","duration":"269.431675ms","start":"2025-11-19T02:33:24.612961Z","end":"2025-11-19T02:33:24.882393Z","steps":["trace[638825018] 'process raft request'  (duration: 268.442306ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:33:24.881604Z","caller":"traceutil/trace.go:172","msg":"trace[575219125] transaction","detail":"{read_only:false; response_revision:25; number_of_response:1; }","duration":"268.723372ms","start":"2025-11-19T02:33:24.612867Z","end":"2025-11-19T02:33:24.881590Z","steps":["trace[575219125] 'process raft request'  (duration: 268.339326ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:33:24.882486Z","caller":"traceutil/trace.go:172","msg":"trace[1999947741] transaction","detail":"{read_only:false; response_revision:29; number_of_response:1; }","duration":"269.452216ms","start":"2025-11-19T02:33:24.613004Z","end":"2025-11-19T02:33:24.882456Z","steps":["trace[1999947741] 'process raft request'  (duration: 268.480863ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T02:33:25.129994Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"136.707213ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-19T02:33:25.130060Z","caller":"traceutil/trace.go:172","msg":"trace[415743726] range","detail":"{range_begin:/registry/clusterroles; range_end:; response_count:0; response_revision:38; }","duration":"136.776355ms","start":"2025-11-19T02:33:24.993265Z","end":"2025-11-19T02:33:25.130041Z","steps":["trace[415743726] 'agreement among raft nodes before linearized reading'  (duration: 58.062813ms)","trace[415743726] 'range keys from in-memory index tree'  (duration: 78.610329ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T02:33:25.130132Z","caller":"traceutil/trace.go:172","msg":"trace[1491601123] transaction","detail":"{read_only:false; response_revision:41; number_of_response:1; }","duration":"204.551424ms","start":"2025-11-19T02:33:24.925566Z","end":"2025-11-19T02:33:25.130118Z","steps":["trace[1491601123] 'process raft request'  (duration: 204.517873ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:33:25.130174Z","caller":"traceutil/trace.go:172","msg":"trace[174907215] transaction","detail":"{read_only:false; response_revision:39; number_of_response:1; }","duration":"227.479242ms","start":"2025-11-19T02:33:24.902675Z","end":"2025-11-19T02:33:25.130155Z","steps":["trace[174907215] 'process raft request'  (duration: 148.721051ms)","trace[174907215] 'compare'  (duration: 78.473506ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T02:33:25.130212Z","caller":"traceutil/trace.go:172","msg":"trace[1794256140] transaction","detail":"{read_only:false; response_revision:40; number_of_response:1; }","duration":"204.979965ms","start":"2025-11-19T02:33:24.925218Z","end":"2025-11-19T02:33:25.130198Z","steps":["trace[1794256140] 'process raft request'  (duration: 204.830067ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T02:33:25.129982Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"136.68032ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/system-node-critical\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-19T02:33:25.130286Z","caller":"traceutil/trace.go:172","msg":"trace[1015112124] range","detail":"{range_begin:/registry/priorityclasses/system-node-critical; range_end:; response_count:0; response_revision:38; }","duration":"137.007799ms","start":"2025-11-19T02:33:24.993265Z","end":"2025-11-19T02:33:25.130272Z","steps":["trace[1015112124] 'agreement among raft nodes before linearized reading'  (duration: 58.06699ms)","trace[1015112124] 'range keys from in-memory index tree'  (duration: 78.571179ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T02:33:25.256425Z","caller":"traceutil/trace.go:172","msg":"trace[205379758] linearizableReadLoop","detail":"{readStateIndex:45; appliedIndex:45; }","duration":"123.419246ms","start":"2025-11-19T02:33:25.132983Z","end":"2025-11-19T02:33:25.256402Z","steps":["trace[205379758] 'read index received'  (duration: 123.411377ms)","trace[205379758] 'applied index is now lower than readState.Index'  (duration: 6.576µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T02:33:25.412798Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"279.785142ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-19T02:33:25.412876Z","caller":"traceutil/trace.go:172","msg":"trace[1977849679] range","detail":"{range_begin:/registry/clusterrolebindings; range_end:; response_count:0; response_revision:41; }","duration":"279.877345ms","start":"2025-11-19T02:33:25.132979Z","end":"2025-11-19T02:33:25.412857Z","steps":["trace[1977849679] 'agreement among raft nodes before linearized reading'  (duration: 123.49662ms)","trace[1977849679] 'range keys from in-memory index tree'  (duration: 156.242618ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T02:33:25.412876Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"156.309139ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356742132085433 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/priorityclasses/system-node-critical\" mod_revision:0 > success:<request_put:<key:\"/registry/priorityclasses/system-node-critical\" value_size:375 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-11-19T02:33:25.413000Z","caller":"traceutil/trace.go:172","msg":"trace[1474665022] transaction","detail":"{read_only:false; response_revision:43; number_of_response:1; }","duration":"279.992796ms","start":"2025-11-19T02:33:25.132997Z","end":"2025-11-19T02:33:25.412990Z","steps":["trace[1474665022] 'process raft request'  (duration: 279.94436ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:33:25.413035Z","caller":"traceutil/trace.go:172","msg":"trace[812418682] transaction","detail":"{read_only:false; response_revision:42; number_of_response:1; }","duration":"280.074687ms","start":"2025-11-19T02:33:25.132941Z","end":"2025-11-19T02:33:25.413016Z","steps":["trace[812418682] 'process raft request'  (duration: 123.578199ms)","trace[812418682] 'compare'  (duration: 156.203517ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T02:33:25.534017Z","caller":"traceutil/trace.go:172","msg":"trace[391540234] transaction","detail":"{read_only:false; response_revision:45; number_of_response:1; }","duration":"116.631277ms","start":"2025-11-19T02:33:25.417358Z","end":"2025-11-19T02:33:25.533990Z","steps":["trace[391540234] 'process raft request'  (duration: 108.358032ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:33:25.672004Z","caller":"traceutil/trace.go:172","msg":"trace[1817067666] linearizableReadLoop","detail":"{readStateIndex:53; appliedIndex:53; }","duration":"128.629837ms","start":"2025-11-19T02:33:25.543346Z","end":"2025-11-19T02:33:25.671976Z","steps":["trace[1817067666] 'read index received'  (duration: 128.620673ms)","trace[1817067666] 'applied index is now lower than readState.Index'  (duration: 7.196µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T02:33:25.719384Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"175.987356ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:discovery\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-19T02:33:25.719442Z","caller":"traceutil/trace.go:172","msg":"trace[2070525438] range","detail":"{range_begin:/registry/clusterrolebindings/system:discovery; range_end:; response_count:0; response_revision:49; }","duration":"176.085993ms","start":"2025-11-19T02:33:25.543343Z","end":"2025-11-19T02:33:25.719429Z","steps":["trace[2070525438] 'agreement among raft nodes before linearized reading'  (duration: 128.713297ms)","trace[2070525438] 'range keys from in-memory index tree'  (duration: 47.24005ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T02:33:25.719542Z","caller":"traceutil/trace.go:172","msg":"trace[398624538] transaction","detail":"{read_only:false; response_revision:51; number_of_response:1; }","duration":"176.068531ms","start":"2025-11-19T02:33:25.543461Z","end":"2025-11-19T02:33:25.719529Z","steps":["trace[398624538] 'process raft request'  (duration: 176.016482ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T02:33:25.719591Z","caller":"traceutil/trace.go:172","msg":"trace[1597272678] transaction","detail":"{read_only:false; response_revision:50; number_of_response:1; }","duration":"176.721113ms","start":"2025-11-19T02:33:25.542855Z","end":"2025-11-19T02:33:25.719576Z","steps":["trace[1597272678] 'process raft request'  (duration: 129.212648ms)","trace[1597272678] 'compare'  (duration: 47.272472ms)"],"step_count":2}
	
	
	==> kernel <==
	 02:34:01 up  1:16,  0 user,  load average: 5.37, 3.92, 2.59
	Linux no-preload-483142 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1cfb54b0c3a9c9af136708d47f32e740d3be7a3c880089823957ef677c8fe86f] <==
	I1119 02:33:36.966187       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 02:33:36.966484       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1119 02:33:36.966659       1 main.go:148] setting mtu 1500 for CNI 
	I1119 02:33:36.966677       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 02:33:36.966696       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T02:33:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 02:33:37.238181       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 02:33:37.238240       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 02:33:37.238257       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 02:33:37.238471       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 02:33:37.566878       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 02:33:37.567659       1 metrics.go:72] Registering metrics
	I1119 02:33:37.567762       1 controller.go:711] "Syncing nftables rules"
	I1119 02:33:47.239514       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 02:33:47.239597       1 main.go:301] handling current node
	I1119 02:33:57.241505       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 02:33:57.241579       1 main.go:301] handling current node
	
	
	==> kube-apiserver [25547ba51e3d7c9f5bc5ed922ef41fd7a5df8f804993f19ee0905141242cb4cf] <==
	I1119 02:33:24.195277       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 02:33:24.323747       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:33:24.323979       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	E1119 02:33:24.325495       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1119 02:33:24.532144       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 02:33:24.610555       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:33:24.611021       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 02:33:25.414182       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 02:33:25.535166       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 02:33:25.535187       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 02:33:26.378666       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 02:33:26.422647       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 02:33:26.503195       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 02:33:26.512716       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1119 02:33:26.514607       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 02:33:26.521465       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 02:33:27.188520       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 02:33:27.551547       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 02:33:27.562002       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 02:33:27.571819       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 02:33:33.033678       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1119 02:33:33.138799       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:33:33.144542       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:33:33.287883       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1119 02:33:58.762772       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:53672: use of closed network connection
	
	
	==> kube-controller-manager [0de9dc5d78d1d9e5fe1c4cae9915420a9f24698374794e8d118dbb18a86cb552] <==
	I1119 02:33:32.182705       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 02:33:32.188146       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 02:33:32.195427       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 02:33:32.204790       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1119 02:33:32.214008       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 02:33:32.223190       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1119 02:33:32.227659       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 02:33:32.229621       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1119 02:33:32.229739       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1119 02:33:32.231124       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 02:33:32.231161       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1119 02:33:32.231172       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 02:33:32.231240       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 02:33:32.231263       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1119 02:33:32.231342       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 02:33:32.231435       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-483142"
	I1119 02:33:32.231488       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1119 02:33:32.231582       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 02:33:32.231665       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1119 02:33:32.231742       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1119 02:33:32.237394       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 02:33:32.251888       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 02:33:32.251932       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 02:33:32.251943       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 02:33:52.255959       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [5b1ec14d6e4ffc1edfbf9bb231d10fa97672c82ed93b0b16806ac5696dbc5fe3] <==
	I1119 02:33:33.746401       1 server_linux.go:53] "Using iptables proxy"
	I1119 02:33:33.820507       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 02:33:33.920825       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 02:33:33.920865       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1119 02:33:33.920995       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 02:33:33.943531       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 02:33:33.943605       1 server_linux.go:132] "Using iptables Proxier"
	I1119 02:33:33.949092       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 02:33:33.949644       1 server.go:527] "Version info" version="v1.34.1"
	I1119 02:33:33.949679       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:33:33.951227       1 config.go:200] "Starting service config controller"
	I1119 02:33:33.951260       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 02:33:33.951318       1 config.go:106] "Starting endpoint slice config controller"
	I1119 02:33:33.951339       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 02:33:33.951433       1 config.go:309] "Starting node config controller"
	I1119 02:33:33.951441       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 02:33:33.951488       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 02:33:33.951496       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 02:33:34.051499       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 02:33:34.051519       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 02:33:34.051563       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 02:33:34.052891       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [73f8e07d520179cb7921a1b4c5c25d67a1e7829441086a765ef18720b414840f] <==
	E1119 02:33:24.054677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 02:33:24.053896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 02:33:24.054651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 02:33:24.054387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 02:33:24.054004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 02:33:24.921306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 02:33:24.956917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 02:33:24.987752       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 02:33:25.004199       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 02:33:25.035732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 02:33:25.049337       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 02:33:25.100939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 02:33:25.148640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 02:33:25.254898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1119 02:33:25.266492       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 02:33:25.406294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 02:33:25.422551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 02:33:25.444105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 02:33:25.531131       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 02:33:25.533269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 02:33:25.539332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 02:33:25.592103       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 02:33:25.601756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 02:33:25.601771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1119 02:33:28.350134       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 02:33:28 no-preload-483142 kubelet[2180]: I1119 02:33:28.477548    2180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-483142" podStartSLOduration=2.477521533 podStartE2EDuration="2.477521533s" podCreationTimestamp="2025-11-19 02:33:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:33:28.464881014 +0000 UTC m=+1.135253586" watchObservedRunningTime="2025-11-19 02:33:28.477521533 +0000 UTC m=+1.147894086"
	Nov 19 02:33:28 no-preload-483142 kubelet[2180]: I1119 02:33:28.492092    2180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-483142" podStartSLOduration=1.492064477 podStartE2EDuration="1.492064477s" podCreationTimestamp="2025-11-19 02:33:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:33:28.478516323 +0000 UTC m=+1.148888895" watchObservedRunningTime="2025-11-19 02:33:28.492064477 +0000 UTC m=+1.162437063"
	Nov 19 02:33:28 no-preload-483142 kubelet[2180]: I1119 02:33:28.502589    2180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-483142" podStartSLOduration=2.502568025 podStartE2EDuration="2.502568025s" podCreationTimestamp="2025-11-19 02:33:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:33:28.492333574 +0000 UTC m=+1.162706144" watchObservedRunningTime="2025-11-19 02:33:28.502568025 +0000 UTC m=+1.172940597"
	Nov 19 02:33:28 no-preload-483142 kubelet[2180]: I1119 02:33:28.515921    2180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-483142" podStartSLOduration=1.515895452 podStartE2EDuration="1.515895452s" podCreationTimestamp="2025-11-19 02:33:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:33:28.502753844 +0000 UTC m=+1.173126419" watchObservedRunningTime="2025-11-19 02:33:28.515895452 +0000 UTC m=+1.186268096"
	Nov 19 02:33:32 no-preload-483142 kubelet[2180]: I1119 02:33:32.178105    2180 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 19 02:33:32 no-preload-483142 kubelet[2180]: I1119 02:33:32.178925    2180 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 19 02:33:33 no-preload-483142 kubelet[2180]: I1119 02:33:33.142843    2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ed3d00d-7760-4eed-af73-abf314cf5901-lib-modules\") pod \"kube-proxy-xhrdt\" (UID: \"2ed3d00d-7760-4eed-af73-abf314cf5901\") " pod="kube-system/kube-proxy-xhrdt"
	Nov 19 02:33:33 no-preload-483142 kubelet[2180]: I1119 02:33:33.142893    2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6bf7df0-8af6-4156-990c-6f70cc159a8c-xtables-lock\") pod \"kindnet-6nr7d\" (UID: \"b6bf7df0-8af6-4156-990c-6f70cc159a8c\") " pod="kube-system/kindnet-6nr7d"
	Nov 19 02:33:33 no-preload-483142 kubelet[2180]: I1119 02:33:33.142927    2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2ed3d00d-7760-4eed-af73-abf314cf5901-kube-proxy\") pod \"kube-proxy-xhrdt\" (UID: \"2ed3d00d-7760-4eed-af73-abf314cf5901\") " pod="kube-system/kube-proxy-xhrdt"
	Nov 19 02:33:33 no-preload-483142 kubelet[2180]: I1119 02:33:33.142950    2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2ed3d00d-7760-4eed-af73-abf314cf5901-xtables-lock\") pod \"kube-proxy-xhrdt\" (UID: \"2ed3d00d-7760-4eed-af73-abf314cf5901\") " pod="kube-system/kube-proxy-xhrdt"
	Nov 19 02:33:33 no-preload-483142 kubelet[2180]: I1119 02:33:33.142980    2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5wls\" (UniqueName: \"kubernetes.io/projected/2ed3d00d-7760-4eed-af73-abf314cf5901-kube-api-access-c5wls\") pod \"kube-proxy-xhrdt\" (UID: \"2ed3d00d-7760-4eed-af73-abf314cf5901\") " pod="kube-system/kube-proxy-xhrdt"
	Nov 19 02:33:33 no-preload-483142 kubelet[2180]: I1119 02:33:33.143004    2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b6bf7df0-8af6-4156-990c-6f70cc159a8c-cni-cfg\") pod \"kindnet-6nr7d\" (UID: \"b6bf7df0-8af6-4156-990c-6f70cc159a8c\") " pod="kube-system/kindnet-6nr7d"
	Nov 19 02:33:33 no-preload-483142 kubelet[2180]: I1119 02:33:33.143030    2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lvff\" (UniqueName: \"kubernetes.io/projected/b6bf7df0-8af6-4156-990c-6f70cc159a8c-kube-api-access-9lvff\") pod \"kindnet-6nr7d\" (UID: \"b6bf7df0-8af6-4156-990c-6f70cc159a8c\") " pod="kube-system/kindnet-6nr7d"
	Nov 19 02:33:33 no-preload-483142 kubelet[2180]: I1119 02:33:33.143053    2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6bf7df0-8af6-4156-990c-6f70cc159a8c-lib-modules\") pod \"kindnet-6nr7d\" (UID: \"b6bf7df0-8af6-4156-990c-6f70cc159a8c\") " pod="kube-system/kindnet-6nr7d"
	Nov 19 02:33:34 no-preload-483142 kubelet[2180]: I1119 02:33:34.476734    2180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xhrdt" podStartSLOduration=1.476710844 podStartE2EDuration="1.476710844s" podCreationTimestamp="2025-11-19 02:33:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:33:34.476575537 +0000 UTC m=+7.146948109" watchObservedRunningTime="2025-11-19 02:33:34.476710844 +0000 UTC m=+7.147083417"
	Nov 19 02:33:37 no-preload-483142 kubelet[2180]: I1119 02:33:37.530258    2180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-6nr7d" podStartSLOduration=1.7262758969999998 podStartE2EDuration="4.530234192s" podCreationTimestamp="2025-11-19 02:33:33 +0000 UTC" firstStartedPulling="2025-11-19 02:33:33.85118927 +0000 UTC m=+6.521561831" lastFinishedPulling="2025-11-19 02:33:36.655147571 +0000 UTC m=+9.325520126" observedRunningTime="2025-11-19 02:33:37.512050093 +0000 UTC m=+10.182422667" watchObservedRunningTime="2025-11-19 02:33:37.530234192 +0000 UTC m=+10.200606764"
	Nov 19 02:33:47 no-preload-483142 kubelet[2180]: I1119 02:33:47.338948    2180 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 19 02:33:47 no-preload-483142 kubelet[2180]: I1119 02:33:47.444723    2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3d24a51-2fec-4ae7-852e-c65aef957597-config-volume\") pod \"coredns-66bc5c9577-zgfk9\" (UID: \"a3d24a51-2fec-4ae7-852e-c65aef957597\") " pod="kube-system/coredns-66bc5c9577-zgfk9"
	Nov 19 02:33:47 no-preload-483142 kubelet[2180]: I1119 02:33:47.444780    2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcf2t\" (UniqueName: \"kubernetes.io/projected/a3d24a51-2fec-4ae7-852e-c65aef957597-kube-api-access-gcf2t\") pod \"coredns-66bc5c9577-zgfk9\" (UID: \"a3d24a51-2fec-4ae7-852e-c65aef957597\") " pod="kube-system/coredns-66bc5c9577-zgfk9"
	Nov 19 02:33:47 no-preload-483142 kubelet[2180]: I1119 02:33:47.444815    2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c66a6926-3a4a-4aa9-b40b-349e1b056683-tmp\") pod \"storage-provisioner\" (UID: \"c66a6926-3a4a-4aa9-b40b-349e1b056683\") " pod="kube-system/storage-provisioner"
	Nov 19 02:33:47 no-preload-483142 kubelet[2180]: I1119 02:33:47.444844    2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb749\" (UniqueName: \"kubernetes.io/projected/c66a6926-3a4a-4aa9-b40b-349e1b056683-kube-api-access-mb749\") pod \"storage-provisioner\" (UID: \"c66a6926-3a4a-4aa9-b40b-349e1b056683\") " pod="kube-system/storage-provisioner"
	Nov 19 02:33:48 no-preload-483142 kubelet[2180]: I1119 02:33:48.526946    2180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zgfk9" podStartSLOduration=15.526918148 podStartE2EDuration="15.526918148s" podCreationTimestamp="2025-11-19 02:33:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:33:48.51253374 +0000 UTC m=+21.182906312" watchObservedRunningTime="2025-11-19 02:33:48.526918148 +0000 UTC m=+21.197290723"
	Nov 19 02:33:50 no-preload-483142 kubelet[2180]: I1119 02:33:50.646540    2180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=17.646511446 podStartE2EDuration="17.646511446s" podCreationTimestamp="2025-11-19 02:33:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:33:48.540054968 +0000 UTC m=+21.210427551" watchObservedRunningTime="2025-11-19 02:33:50.646511446 +0000 UTC m=+23.316884019"
	Nov 19 02:33:50 no-preload-483142 kubelet[2180]: I1119 02:33:50.766532    2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgrxl\" (UniqueName: \"kubernetes.io/projected/90b24763-24ed-4631-9502-e0fab55d3520-kube-api-access-hgrxl\") pod \"busybox\" (UID: \"90b24763-24ed-4631-9502-e0fab55d3520\") " pod="default/busybox"
	Nov 19 02:33:53 no-preload-483142 kubelet[2180]: I1119 02:33:53.526359    2180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.323886287 podStartE2EDuration="3.526338888s" podCreationTimestamp="2025-11-19 02:33:50 +0000 UTC" firstStartedPulling="2025-11-19 02:33:51.085841079 +0000 UTC m=+23.756213644" lastFinishedPulling="2025-11-19 02:33:53.288293691 +0000 UTC m=+25.958666245" observedRunningTime="2025-11-19 02:33:53.526278149 +0000 UTC m=+26.196650722" watchObservedRunningTime="2025-11-19 02:33:53.526338888 +0000 UTC m=+26.196711460"
	
	
	==> storage-provisioner [535511cf0eb8e278c6a97e248917117a13744f6d36d6b63bef86d79ddfc7c849] <==
	I1119 02:33:47.928423       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 02:33:47.942405       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 02:33:47.942456       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 02:33:47.946668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:33:47.954182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 02:33:47.954499       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 02:33:47.954693       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-483142_ef654051-7e72-427c-b5e5-25db73824692!
	I1119 02:33:47.954591       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e5d4fb73-95d8-4c7f-b8d8-87d764024a0e", APIVersion:"v1", ResourceVersion:"412", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-483142_ef654051-7e72-427c-b5e5-25db73824692 became leader
	W1119 02:33:47.961037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:33:47.966350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 02:33:48.055038       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-483142_ef654051-7e72-427c-b5e5-25db73824692!
	W1119 02:33:49.969729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:33:49.974968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:33:51.978327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:33:51.982416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:33:53.985468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:33:53.990011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:33:55.993051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:33:55.998141       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:33:58.002201       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:33:58.006778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:34:00.010442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:34:00.016713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:34:02.020511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:34:02.024906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
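The paired "Waiting for caches to sync" / "Caches are synced" lines in the kube-proxy and kube-scheduler logs above are the standard client-go shared-informer startup handshake, and the scheduler's earlier "Failed to watch ... is forbidden" reflector errors are the usual transient noise while RBAC bindings propagate during control-plane bring-up (note the successful sync at 02:33:28). A minimal sketch of that handshake, with a hypothetical kubeconfig path chosen purely for illustration:

    package main

    import (
        "log"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig path, for illustration only.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        factory := informers.NewSharedInformerFactory(client, 30*time.Second)
        pods := factory.Core().V1().Pods().Informer()

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop) // reflectors begin List+Watch; RBAC denials surface here as "Failed to watch"

        // The "Waiting for caches to sync" phase: blocks until the initial List
        // has been delivered, after which the component logs "Caches are synced".
        if !cache.WaitForCacheSync(stop, pods.HasSynced) {
            log.Fatal("timed out waiting for caches to sync")
        }
        log.Println("caches are synced")
    }

The retry loop inside the reflector is why the forbidden errors are harmless here: each failed List is retried until authorization succeeds, and only a sync that never completes would block startup.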
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-483142 -n no-preload-483142
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-483142 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (12.10s)
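The storage-provisioner log above acquires its leader lock through a v1 Endpoints object, which is what triggers the repeated "v1 Endpoints is deprecated in v1.33+" client warnings on every renewal. Below is a minimal sketch of the Lease-based lock that client-go recommends instead; this is an illustration, not what the shipped storage-provisioner binary does, and only the lock name and namespace are taken from the log:

    package main

    import (
        "context"
        "log"
        "os"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            log.Fatal(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        id, _ := os.Hostname() // lock-holder identity; good enough for a sketch

        // A coordination.k8s.io/v1 Lease emits none of the Endpoints deprecation warnings.
        lock := &resourcelock.LeaseLock{
            LeaseMeta: metav1.ObjectMeta{
                Name:      "k8s.io-minikube-hostpath", // same lock name as in the log
                Namespace: "kube-system",
            },
            Client:     client.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: id},
        }

        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:          lock,
            LeaseDuration: 15 * time.Second,
            RenewDeadline: 10 * time.Second,
            RetryPeriod:   2 * time.Second,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) {
                    log.Println("acquired lease, starting provisioner controller")
                },
                OnStoppedLeading: func() {
                    log.Println("lost lease, shutting down")
                },
            },
        })
    }

Because the lock is renewed every few seconds, an Endpoints-based lock produces the warning on every renewal, which is why the same two lines repeat throughout the log above.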

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (14.58s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-168452 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [21d4a418-fd63-4ac5-922d-cb793556218b] Pending
helpers_test.go:352: "busybox" [21d4a418-fd63-4ac5-922d-cb793556218b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [21d4a418-fd63-4ac5-922d-cb793556218b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.005743887s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-168452 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
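This is the same assertion that failed for no-preload above: after the busybox pod is healthy, the test execs 'ulimit -n' inside it and compares the output against the 1048576 open-files limit minikube is expected to configure, but the pod saw a 1024 soft limit. A self-contained sketch of that check, assuming kubectl is on PATH and reusing the profile name above as the context:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Mirrors the check behind start_stop_delete_test.go:194 (sketch only).
        out, err := exec.Command("kubectl", "--context", "embed-certs-168452",
            "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").CombinedOutput()
        if err != nil {
            fmt.Fprintf(os.Stderr, "kubectl exec failed: %v\n%s", err, out)
            os.Exit(1)
        }
        if got := strings.TrimSpace(string(out)); got != "1048576" {
            fmt.Printf("'ulimit -n' returned %s, expected 1048576\n", got)
        }
    }
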
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-168452
helpers_test.go:243: (dbg) docker inspect embed-certs-168452:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "14fb37aefb5bd1cb8ec42ec109da06032662983ae02a3fa83036ce381167f905",
	        "Created": "2025-11-19T02:33:25.873238592Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 316534,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T02:33:25.915223704Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/14fb37aefb5bd1cb8ec42ec109da06032662983ae02a3fa83036ce381167f905/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/14fb37aefb5bd1cb8ec42ec109da06032662983ae02a3fa83036ce381167f905/hostname",
	        "HostsPath": "/var/lib/docker/containers/14fb37aefb5bd1cb8ec42ec109da06032662983ae02a3fa83036ce381167f905/hosts",
	        "LogPath": "/var/lib/docker/containers/14fb37aefb5bd1cb8ec42ec109da06032662983ae02a3fa83036ce381167f905/14fb37aefb5bd1cb8ec42ec109da06032662983ae02a3fa83036ce381167f905-json.log",
	        "Name": "/embed-certs-168452",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-168452:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-168452",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "14fb37aefb5bd1cb8ec42ec109da06032662983ae02a3fa83036ce381167f905",
	                "LowerDir": "/var/lib/docker/overlay2/2c7e61b62a859781345fc39605045f5b0ddb25e8581ee80965ef7e33e7ef9e35-init/diff:/var/lib/docker/overlay2/de7938e6a920c133c8c6b988444cfbf6706fdc6982445229ca70e2488a725edb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2c7e61b62a859781345fc39605045f5b0ddb25e8581ee80965ef7e33e7ef9e35/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2c7e61b62a859781345fc39605045f5b0ddb25e8581ee80965ef7e33e7ef9e35/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2c7e61b62a859781345fc39605045f5b0ddb25e8581ee80965ef7e33e7ef9e35/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-168452",
	                "Source": "/var/lib/docker/volumes/embed-certs-168452/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-168452",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-168452",
	                "name.minikube.sigs.k8s.io": "embed-certs-168452",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "679abce52d7b7add15d270d125bfda95287d1d3669dc0ecae8498e1d1004ba08",
	            "SandboxKey": "/var/run/docker/netns/679abce52d7b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-168452": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "140e12fde1bf50003f04a6d771e1153e0e129959991946ee8cd5220e8e5fd632",
	                    "EndpointID": "7fdb01e4ecdc7fdfbc3eb09964dbd1e688c65968c77d3ae3e283cdaf220f296c",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "f2:15:1b:af:0f:f9",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-168452",
	                        "14fb37aefb5b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
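Note "Ulimits": [] in the HostConfig above: no explicit nofile limit is pinned on the kic container, so it inherits whatever the host Docker daemon applies by default, and the pods that containerd runs inside it inherit from there. One plausible reading of the 1024 result is therefore a host daemon default leaking through. A quick way to compare the declared and effective limits, assuming a local docker CLI (sketch only):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run shells out to docker and returns combined output; panics on failure (sketch only).
    func run(args ...string) string {
        out, err := exec.Command("docker", args...).CombinedOutput()
        if err != nil {
            panic(fmt.Sprintf("docker %v: %v\n%s", args, err, out))
        }
        return string(out)
    }

    func main() {
        // Ulimits explicitly declared on the container ("null" or "[]" means daemon defaults).
        fmt.Print(run("inspect", "--format", "{{json .HostConfig.Ulimits}}", "embed-certs-168452"))
        // Soft open-files limit actually visible inside the container.
        fmt.Print(run("exec", "embed-certs-168452", "sh", "-c", "ulimit -n"))
    }
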
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-168452 -n embed-certs-168452
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-168452 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-168452 logs -n 25: (1.074383004s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-212776 sudo systemctl cat docker --no-pager                                                                                                                                                                                               │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cat /etc/docker/daemon.json                                                                                                                                                                                                   │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo docker system info                                                                                                                                                                                                            │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                           │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                           │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cri-dockerd --version                                                                                                                                                                                                         │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo containerd config dump                                                                                                                                                                                                        │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo crio config                                                                                                                                                                                                                   │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ delete  │ -p bridge-212776                                                                                                                                                                                                                                    │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ start   │ -p embed-certs-168452 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-168452     │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:34 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-691094 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-691094 │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ stop    │ -p old-k8s-version-691094 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-691094 │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:34 UTC │
	│ addons  │ enable metrics-server -p no-preload-483142 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-483142      │ jenkins │ v1.37.0 │ 19 Nov 25 02:34 UTC │ 19 Nov 25 02:34 UTC │
	│ stop    │ -p no-preload-483142 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-483142      │ jenkins │ v1.37.0 │ 19 Nov 25 02:34 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-691094 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-691094 │ jenkins │ v1.37.0 │ 19 Nov 25 02:34 UTC │ 19 Nov 25 02:34 UTC │
	│ start   │ -p old-k8s-version-691094 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-691094 │ jenkins │ v1.37.0 │ 19 Nov 25 02:34 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 02:34:04
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 02:34:04.919150  324696 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:34:04.919282  324696 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:34:04.919287  324696 out.go:374] Setting ErrFile to fd 2...
	I1119 02:34:04.919291  324696 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:34:04.919526  324696 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11107/.minikube/bin
	I1119 02:34:04.919996  324696 out.go:368] Setting JSON to false
	I1119 02:34:04.921305  324696 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4585,"bootTime":1763515060,"procs":312,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 02:34:04.921427  324696 start.go:143] virtualization: kvm guest
	I1119 02:34:04.923797  324696 out.go:179] * [old-k8s-version-691094] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 02:34:04.925360  324696 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:34:04.925400  324696 notify.go:221] Checking for updates...
	I1119 02:34:04.928271  324696 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:34:04.929908  324696 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-11107/kubeconfig
	I1119 02:34:04.931330  324696 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11107/.minikube
	I1119 02:34:04.932627  324696 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 02:34:04.934014  324696 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:34:04.936017  324696 config.go:182] Loaded profile config "old-k8s-version-691094": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1119 02:34:04.937743  324696 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1119 02:34:04.938948  324696 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:34:04.966788  324696 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 02:34:04.966946  324696 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:34:05.032906  324696 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-19 02:34:05.020949459 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:34:05.033104  324696 docker.go:319] overlay module found
	I1119 02:34:05.035095  324696 out.go:179] * Using the docker driver based on existing profile
	I1119 02:34:05.036228  324696 start.go:309] selected driver: docker
	I1119 02:34:05.036251  324696 start.go:930] validating driver "docker" against &{Name:old-k8s-version-691094 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-691094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:34:05.036360  324696 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:34:05.037062  324696 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:34:05.094450  324696 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-19 02:34:05.084491917 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:34:05.094705  324696 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:34:05.094732  324696 cni.go:84] Creating CNI manager for ""
	I1119 02:34:05.094775  324696 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 02:34:05.094801  324696 start.go:353] cluster config:
	{Name:old-k8s-version-691094 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-691094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:34:05.096713  324696 out.go:179] * Starting "old-k8s-version-691094" primary control-plane node in "old-k8s-version-691094" cluster
	I1119 02:34:05.097833  324696 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1119 02:34:05.098936  324696 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 02:34:05.100075  324696 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1119 02:34:05.100110  324696 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1119 02:34:05.100119  324696 cache.go:65] Caching tarball of preloaded images
	I1119 02:34:05.100190  324696 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 02:34:05.100202  324696 preload.go:238] Found /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1119 02:34:05.100211  324696 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1119 02:34:05.100314  324696 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/old-k8s-version-691094/config.json ...
	I1119 02:34:05.120962  324696 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 02:34:05.120979  324696 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 02:34:05.120995  324696 cache.go:243] Successfully downloaded all kic artifacts
	I1119 02:34:05.121027  324696 start.go:360] acquireMachinesLock for old-k8s-version-691094: {Name:mkfb000600dc66dbf8c170048dfbe67bdac66bf9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:34:05.121100  324696 start.go:364] duration metric: took 40.433µs to acquireMachinesLock for "old-k8s-version-691094"
	I1119 02:34:05.121123  324696 start.go:96] Skipping create...Using existing machine configuration
	I1119 02:34:05.121133  324696 fix.go:54] fixHost starting: 
	I1119 02:34:05.121390  324696 cli_runner.go:164] Run: docker container inspect old-k8s-version-691094 --format={{.State.Status}}
	I1119 02:34:05.139585  324696 fix.go:112] recreateIfNeeded on old-k8s-version-691094: state=Stopped err=<nil>
	W1119 02:34:05.139615  324696 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 02:34:05.141570  324696 out.go:252] * Restarting existing docker container for "old-k8s-version-691094" ...
	I1119 02:34:05.141642  324696 cli_runner.go:164] Run: docker start old-k8s-version-691094
	I1119 02:34:05.423915  324696 cli_runner.go:164] Run: docker container inspect old-k8s-version-691094 --format={{.State.Status}}
	I1119 02:34:05.443823  324696 kic.go:430] container "old-k8s-version-691094" state is running.
	I1119 02:34:05.444247  324696 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-691094
	I1119 02:34:05.464675  324696 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/old-k8s-version-691094/config.json ...
	I1119 02:34:05.464891  324696 machine.go:94] provisionDockerMachine start ...
	I1119 02:34:05.464954  324696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-691094
	I1119 02:34:05.484472  324696 main.go:143] libmachine: Using SSH client type: native
	I1119 02:34:05.484779  324696 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33110 <nil> <nil>}
	I1119 02:34:05.484794  324696 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 02:34:05.485567  324696 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57472->127.0.0.1:33110: read: connection reset by peer
	I1119 02:34:08.621350  324696 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-691094
	
	I1119 02:34:08.621417  324696 ubuntu.go:182] provisioning hostname "old-k8s-version-691094"
	I1119 02:34:08.621483  324696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-691094
	I1119 02:34:08.639845  324696 main.go:143] libmachine: Using SSH client type: native
	I1119 02:34:08.640137  324696 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33110 <nil> <nil>}
	I1119 02:34:08.640158  324696 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-691094 && echo "old-k8s-version-691094" | sudo tee /etc/hostname
	I1119 02:34:08.781521  324696 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-691094
	
	I1119 02:34:08.781605  324696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-691094
	I1119 02:34:08.801090  324696 main.go:143] libmachine: Using SSH client type: native
	I1119 02:34:08.801287  324696 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33110 <nil> <nil>}
	I1119 02:34:08.801303  324696 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-691094' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-691094/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-691094' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 02:34:08.935253  324696 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 02:34:08.935285  324696 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-11107/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-11107/.minikube}
	I1119 02:34:08.935311  324696 ubuntu.go:190] setting up certificates
	I1119 02:34:08.935321  324696 provision.go:84] configureAuth start
	I1119 02:34:08.935410  324696 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-691094
	I1119 02:34:08.954025  324696 provision.go:143] copyHostCerts
	I1119 02:34:08.954084  324696 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11107/.minikube/ca.pem, removing ...
	I1119 02:34:08.954095  324696 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11107/.minikube/ca.pem
	I1119 02:34:08.954168  324696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-11107/.minikube/ca.pem (1082 bytes)
	I1119 02:34:08.954269  324696 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11107/.minikube/cert.pem, removing ...
	I1119 02:34:08.954277  324696 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11107/.minikube/cert.pem
	I1119 02:34:08.954313  324696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-11107/.minikube/cert.pem (1123 bytes)
	I1119 02:34:08.954408  324696 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11107/.minikube/key.pem, removing ...
	I1119 02:34:08.954419  324696 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11107/.minikube/key.pem
	I1119 02:34:08.954455  324696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-11107/.minikube/key.pem (1675 bytes)
	I1119 02:34:08.954531  324696 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-11107/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-691094 san=[127.0.0.1 192.168.103.2 localhost minikube old-k8s-version-691094]
	I1119 02:34:09.259240  324696 provision.go:177] copyRemoteCerts
	I1119 02:34:09.259314  324696 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 02:34:09.259356  324696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-691094
	I1119 02:34:09.277923  324696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/old-k8s-version-691094/id_rsa Username:docker}
	I1119 02:34:09.374190  324696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 02:34:09.393265  324696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1119 02:34:09.411935  324696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 02:34:09.432717  324696 provision.go:87] duration metric: took 497.361291ms to configureAuth
	I1119 02:34:09.432754  324696 ubuntu.go:206] setting minikube options for container-runtime
	I1119 02:34:09.432966  324696 config.go:182] Loaded profile config "old-k8s-version-691094": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1119 02:34:09.432979  324696 machine.go:97] duration metric: took 3.968076168s to provisionDockerMachine
	I1119 02:34:09.432987  324696 start.go:293] postStartSetup for "old-k8s-version-691094" (driver="docker")
	I1119 02:34:09.432998  324696 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 02:34:09.433047  324696 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 02:34:09.433079  324696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-691094
	I1119 02:34:09.452076  324696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/old-k8s-version-691094/id_rsa Username:docker}
	I1119 02:34:09.549428  324696 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 02:34:09.554004  324696 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 02:34:09.554030  324696 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 02:34:09.554039  324696 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11107/.minikube/addons for local assets ...
	I1119 02:34:09.554091  324696 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11107/.minikube/files for local assets ...
	I1119 02:34:09.554175  324696 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem -> 146572.pem in /etc/ssl/certs
	I1119 02:34:09.554302  324696 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 02:34:09.562493  324696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem --> /etc/ssl/certs/146572.pem (1708 bytes)
	I1119 02:34:09.580517  324696 start.go:296] duration metric: took 147.513177ms for postStartSetup
	I1119 02:34:09.580594  324696 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:34:09.580640  324696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-691094
	I1119 02:34:09.599654  324696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/old-k8s-version-691094/id_rsa Username:docker}
	I1119 02:34:09.692833  324696 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 02:34:09.697541  324696 fix.go:56] duration metric: took 4.576402301s for fixHost
	I1119 02:34:09.697567  324696 start.go:83] releasing machines lock for "old-k8s-version-691094", held for 4.576454368s
	I1119 02:34:09.697662  324696 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-691094
	I1119 02:34:09.716331  324696 ssh_runner.go:195] Run: cat /version.json
	I1119 02:34:09.716402  324696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-691094
	I1119 02:34:09.716409  324696 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 02:34:09.716490  324696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-691094
	I1119 02:34:09.736675  324696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/old-k8s-version-691094/id_rsa Username:docker}
	I1119 02:34:09.738061  324696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/old-k8s-version-691094/id_rsa Username:docker}
	I1119 02:34:09.829692  324696 ssh_runner.go:195] Run: systemctl --version
	I1119 02:34:09.887465  324696 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 02:34:09.892578  324696 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 02:34:09.892646  324696 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 02:34:09.901399  324696 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 02:34:09.901428  324696 start.go:496] detecting cgroup driver to use...
	I1119 02:34:09.901458  324696 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 02:34:09.901495  324696 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1119 02:34:09.919955  324696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1119 02:34:09.934028  324696 docker.go:218] disabling cri-docker service (if available) ...
	I1119 02:34:09.934084  324696 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 02:34:09.949634  324696 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 02:34:09.962553  324696 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 02:34:10.049637  324696 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 02:34:10.129412  324696 docker.go:234] disabling docker service ...
	I1119 02:34:10.129481  324696 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 02:34:10.144428  324696 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 02:34:10.157554  324696 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 02:34:10.239317  324696 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 02:34:10.316126  324696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 02:34:10.329220  324696 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 02:34:10.344358  324696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1119 02:34:10.353528  324696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1119 02:34:10.362893  324696 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1119 02:34:10.362961  324696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1119 02:34:10.372376  324696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 02:34:10.381551  324696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1119 02:34:10.391045  324696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 02:34:10.400284  324696 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 02:34:10.408739  324696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1119 02:34:10.417833  324696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1119 02:34:10.426785  324696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1119 02:34:10.436234  324696 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 02:34:10.443826  324696 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 02:34:10.451672  324696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:34:10.532639  324696 ssh_runner.go:195] Run: sudo systemctl restart containerd
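
The sed edits above flip containerd's cgroup driver to systemd before the restart. A minimal Go sketch of the same idempotent rewrite follows (assuming the stock /etc/containerd/config.toml layout; this is not minikube's actual code, which shells out to sed as logged):

	package main

	import (
		"log"
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/containerd/config.toml"
		data, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		// Mirrors: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g'
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
		if err := os.WriteFile(path, out, 0644); err != nil {
			log.Fatal(err)
		}
		// containerd must be restarted afterwards, as the next log line does.
	}
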
	I1119 02:34:10.644692  324696 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1119 02:34:10.644752  324696 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1119 02:34:10.648866  324696 start.go:564] Will wait 60s for crictl version
	I1119 02:34:10.648925  324696 ssh_runner.go:195] Run: which crictl
	I1119 02:34:10.652785  324696 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 02:34:10.678515  324696 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
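
After restarting containerd, the "Will wait 60s for socket path" and "Will wait 60s for crictl version" lines above poll until the CRI socket appears. A self-contained sketch of such a wait loop (path and budget taken from the log; the helper name waitForPath is hypothetical, not minikube's API):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForPath polls for path every interval until timeout elapses.
	func waitForPath(path string, timeout, interval time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(interval)
		}
	}

	func main() {
		err := waitForPath("/run/containerd/containerd.sock", 60*time.Second, 500*time.Millisecond)
		if err != nil {
			fmt.Println(err)
			os.Exit(1)
		}
		fmt.Println("socket is ready")
	}
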
	I1119 02:34:10.678580  324696 ssh_runner.go:195] Run: containerd --version
	I1119 02:34:10.699600  324696 ssh_runner.go:195] Run: containerd --version
	I1119 02:34:10.721969  324696 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
	I1119 02:34:10.723273  324696 cli_runner.go:164] Run: docker network inspect old-k8s-version-691094 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:34:10.742836  324696 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1119 02:34:10.746987  324696 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:34:10.757193  324696 kubeadm.go:884] updating cluster {Name:old-k8s-version-691094 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-691094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 02:34:10.757295  324696 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1119 02:34:10.757339  324696 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:34:10.783775  324696 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 02:34:10.783796  324696 containerd.go:534] Images already preloaded, skipping extraction
	I1119 02:34:10.783845  324696 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:34:10.809574  324696 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 02:34:10.809594  324696 cache_images.go:86] Images are preloaded, skipping loading
	I1119 02:34:10.809602  324696 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.28.0 containerd true true} ...
	I1119 02:34:10.809703  324696 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-691094 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-691094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 02:34:10.809764  324696 ssh_runner.go:195] Run: sudo crictl info
	I1119 02:34:10.837391  324696 cni.go:84] Creating CNI manager for ""
	I1119 02:34:10.837414  324696 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 02:34:10.837430  324696 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 02:34:10.837448  324696 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-691094 NodeName:old-k8s-version-691094 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:34:10.837570  324696 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-691094"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 02:34:10.837624  324696 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1119 02:34:10.846499  324696 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 02:34:10.846567  324696 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:34:10.854749  324696 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1119 02:34:10.867992  324696 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:34:10.880458  324696 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2178 bytes)
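
The 2178-byte kubeadm.yaml.new written above is the rendered form of the kubeadm config printed earlier. As a rough illustration only (the template text is copied from the log output, but the params struct and field names are assumptions, not minikube's real template code), a stanza like the InitConfiguration can be produced with text/template:

	package main

	import (
		"os"
		"text/template"
	)

	const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.AdvertiseAddress}}
	`

	func main() {
		tmpl := template.Must(template.New("init").Parse(initCfg))
		params := struct {
			AdvertiseAddress string
			BindPort         int
			NodeName         string
		}{"192.168.103.2", 8443, "old-k8s-version-691094"}
		// Print the rendered manifest; the log above then scp's the result
		// to /var/tmp/minikube/kubeadm.yaml.new on the node.
		if err := tmpl.Execute(os.Stdout, params); err != nil {
			panic(err)
		}
	}
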
	I1119 02:34:10.893442  324696 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:34:10.897416  324696 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
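
The bash one-liner above rewrites /etc/hosts idempotently: it strips any stale control-plane.minikube.internal line, appends the current mapping, and copies the staged file back into place. A minimal Go equivalent (it writes /etc/hosts directly instead of staging through /tmp and sudo, so it needs root):

	package main

	import (
		"log"
		"os"
		"strings"
	)

	func main() {
		const hosts = "/etc/hosts"
		const entry = "192.168.103.2\tcontrol-plane.minikube.internal"

		data, err := os.ReadFile(hosts)
		if err != nil {
			log.Fatal(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Mirrors: grep -v $'\tcontrol-plane.minikube.internal$'
			if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)
		if err := os.WriteFile(hosts, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			log.Fatal(err)
		}
	}
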
	I1119 02:34:10.907534  324696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:34:10.989244  324696 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:34:11.016308  324696 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/old-k8s-version-691094 for IP: 192.168.103.2
	I1119 02:34:11.016329  324696 certs.go:195] generating shared ca certs ...
	I1119 02:34:11.016347  324696 certs.go:227] acquiring lock for ca certs: {Name:mk11d6789b2333e17b3937495b501fbcca15c242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:34:11.016511  324696 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-11107/.minikube/ca.key
	I1119 02:34:11.016589  324696 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-11107/.minikube/proxy-client-ca.key
	I1119 02:34:11.016608  324696 certs.go:257] generating profile certs ...
	I1119 02:34:11.016704  324696 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/old-k8s-version-691094/client.key
	I1119 02:34:11.016754  324696 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/old-k8s-version-691094/apiserver.key.f11a8208
	I1119 02:34:11.016788  324696 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/old-k8s-version-691094/proxy-client.key
	I1119 02:34:11.016891  324696 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/14657.pem (1338 bytes)
	W1119 02:34:11.016918  324696 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-11107/.minikube/certs/14657_empty.pem, impossibly tiny 0 bytes
	I1119 02:34:11.016926  324696 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 02:34:11.016954  324696 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem (1082 bytes)
	I1119 02:34:11.016981  324696 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:34:11.017012  324696 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/key.pem (1675 bytes)
	I1119 02:34:11.017069  324696 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem (1708 bytes)
	I1119 02:34:11.017776  324696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:34:11.037349  324696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 02:34:11.057607  324696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:34:11.076555  324696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:34:11.100260  324696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/old-k8s-version-691094/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1119 02:34:11.123800  324696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/old-k8s-version-691094/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 02:34:11.144085  324696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/old-k8s-version-691094/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:34:11.163622  324696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/old-k8s-version-691094/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 02:34:11.182393  324696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:34:11.200995  324696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/certs/14657.pem --> /usr/share/ca-certificates/14657.pem (1338 bytes)
	I1119 02:34:11.221567  324696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem --> /usr/share/ca-certificates/146572.pem (1708 bytes)
	I1119 02:34:11.239423  324696 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 02:34:11.254012  324696 ssh_runner.go:195] Run: openssl version
	I1119 02:34:11.260448  324696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146572.pem && ln -fs /usr/share/ca-certificates/146572.pem /etc/ssl/certs/146572.pem"
	I1119 02:34:11.269712  324696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146572.pem
	I1119 02:34:11.273689  324696 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:02 /usr/share/ca-certificates/146572.pem
	I1119 02:34:11.273747  324696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146572.pem
	I1119 02:34:11.309318  324696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/146572.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 02:34:11.318183  324696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:34:11.326691  324696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:34:11.330473  324696 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:57 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:34:11.330521  324696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:34:11.366523  324696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 02:34:11.375398  324696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14657.pem && ln -fs /usr/share/ca-certificates/14657.pem /etc/ssl/certs/14657.pem"
	I1119 02:34:11.384554  324696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14657.pem
	I1119 02:34:11.388814  324696 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:02 /usr/share/ca-certificates/14657.pem
	I1119 02:34:11.388877  324696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14657.pem
	I1119 02:34:11.425041  324696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14657.pem /etc/ssl/certs/51391683.0"
	I1119 02:34:11.435060  324696 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:34:11.439312  324696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 02:34:11.474087  324696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 02:34:11.508880  324696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 02:34:11.543396  324696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 02:34:11.588583  324696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 02:34:11.638230  324696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
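
Each `openssl x509 -checkend 86400` run above exits non-zero if the certificate expires within the next 24 hours, which is what would trigger regeneration. The same check can be sketched in Go with crypto/x509 (cert path copied from the log; expiresWithin is a hypothetical helper, not minikube's):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			panic(err)
		}
		fmt.Println("expires within 24h:", soon)
	}
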
	I1119 02:34:11.693651  324696 kubeadm.go:401] StartCluster: {Name:old-k8s-version-691094 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-691094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:34:11.693762  324696 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1119 02:34:11.693825  324696 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:34:11.727661  324696 cri.go:89] found id: "c1615f8b7603bea728b5006e32d7828f14c61d090c32205f627734bd31dbfc55"
	I1119 02:34:11.727692  324696 cri.go:89] found id: "64f7107f2fd5d904d3ca02f88e2104c599bec7b13a530829ee0c761e554b6557"
	I1119 02:34:11.727698  324696 cri.go:89] found id: "9cc7faf54132ee2da8525106f2f0da439e1d48a41206667ecc42c4add5564fcc"
	I1119 02:34:11.727702  324696 cri.go:89] found id: "1e139eec825de9114abd6701b9ab42ee2b8ab9b766ece6ead08550a8ad647722"
	I1119 02:34:11.727705  324696 cri.go:89] found id: "e773989cb5b9719f34c18e9670a458f821a72e3b0c1f48c1667978ae16fa12a4"
	I1119 02:34:11.727709  324696 cri.go:89] found id: "dda3cde60adcefe6dc905f202c5021fdb56f1c94c37adce1fdae5c18d6080acc"
	I1119 02:34:11.727713  324696 cri.go:89] found id: "5dde09d6b5534707795709157ee81edeb05e31172278aaf5526347ba15edf149"
	I1119 02:34:11.727716  324696 cri.go:89] found id: "ae40aa345e79cbe278439afee2a5038c48c1ac05f3405d97259e5af73e3fbf92"
	I1119 02:34:11.727723  324696 cri.go:89] found id: "b77b79fa6a466aa3e18c8bd7eba3c607337982e750126d443bc923b253db1773"
	I1119 02:34:11.727736  324696 cri.go:89] found id: "dbc14fc0cc43a9945343d07a4033d270d1157c5a3b861d1386847247f42a1497"
	I1119 02:34:11.727741  324696 cri.go:89] found id: "2710c5af3eee6491ef45de25344cda5fa8a6bddc3604a03908e7ec36cc3ec259"
	I1119 02:34:11.727747  324696 cri.go:89] found id: ""
	I1119 02:34:11.727797  324696 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1119 02:34:11.757594  324696 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"0d529a5e629230c31fb73b3d2ed3cfce798ecc4d524c6ad341c7d6de85135800","pid":815,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d529a5e629230c31fb73b3d2ed3cfce798ecc4d524c6ad341c7d6de85135800","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d529a5e629230c31fb73b3d2ed3cfce798ecc4d524c6ad341c7d6de85135800/rootfs","created":"2025-11-19T02:34:11.619427608Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.9","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"0d529a5e629230c31fb73b3d2ed3cfce798ecc4d524c6ad341c7d6de85135800","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-old-k8s-version-691094_43c6e24af58f4532899857c154187af1","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-old-k8s-version-691094","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"43c6e24af58f4532899857c154187af1"},"owner":"root"},{"ociVersion":"1.2.1","id":"35d29561c1f5d9397b4944b18bf98459d3454b169cc0c4ebd658a27a3500efef","pid":868,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/35d29561c1f5d9397b4944b18bf98459d3454b169cc0c4ebd658a27a3500efef","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/35d29561c1f5d9397b4944b18bf98459d3454b169cc0c4ebd658a27a3500efef/rootfs","created":"2025-11-19T02:34:11.659902181Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.9","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"35d29561c1f5d9397b4944b18bf98459d3454b169cc0c4ebd658a27a3500efef","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-old-k8s-version-691094_84ef4fdb4d0c12a012863d9b76078617","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-old-k8s-version-691094","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"84ef4fdb4d0c12a012863d9b76078617"},"owner":"root"},{"ociVersion":"1.2.1","id":"64f7107f2fd5d904d3ca02f88e2104c599bec7b13a530829ee0c761e554b6557","pid":939,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/64f7107f2fd5d904d3ca02f88e2104c599bec7b13a530829ee0c761e554b6557","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/64f7107f2fd5d904d3ca02f88e2104c599bec7b13a530829ee0c761e554b6557/rootfs","created":"2025-11-19T02:34:11.738650447Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri.sandbox-id":"723d9c424dadf8757aed6fa281d70acfbc64ff1356e77d613f78aee773d061b5","io.kubernetes.cri.sandbox-name":"etcd-old-k8s-version-691094","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4f725102ec78b74a198d6aa56d892f56"},"owner":"root"},{"ociVersion":"1.2.1","id":"723d9c424dadf8757aed6fa281d70acfbc64ff1356e77d613f78aee773d061b5","pid":833,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/723d9c424dadf8757aed6fa281d70acfbc64ff1356e77d613f78aee773d061b5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/723d9c424dadf8757aed6fa281d70acfbc64ff1356e77d613f78aee773d061b5/rootfs","created":"2025-11-19T02:34:11.622200095Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.9","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"723d9c424dadf8757aed6fa281d70acfbc64ff1356e77d613f78aee773d061b5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-old-k8s-version-691094_4f725102ec78b74a198d6aa56d892f56","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-old-k8s-version-691094","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4f725102ec78b74a198d6aa56d892f56"},"owner":"root"},{"ociVersion":"1.2.1","id":"769c8020ea41c57ba98f823e94ac992c01cb7b712588250eb6d55cb0c50b7475","pid":870,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/769c8020ea41c57ba98f823e94ac992c01cb7b712588250eb6d55cb0c50b7475","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/769c8020ea41c57ba98f823e94ac992c01cb7b712588250eb6d55cb0c50b7475/rootfs","created":"2025-11-19T02:34:11.65982413Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.9","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"769c8020ea41c57ba98f823e94ac992c01cb7b712588250eb6d55cb0c50b7475","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-old-k8s-version-691094_4bd64ce31c0d565619382fafb2d03a51","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-old-k8s-version-691094","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4bd64ce31c0d565619382fafb2d03a51"},"owner":"root"},{"ociVersion":"1.2.1","id":"927b1e80c6867d0792be216dabd30704cdb03840f55b022c96a6e4f5b0fe51e5","pid":978,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/927b1e80c6867d0792be216dabd30704cdb03840f55b022c96a6e4f5b0fe51e5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/927b1e80c6867d0792be216dabd30704cdb03840f55b022c96a6e4f5b0fe51e5/rootfs","created":"0001-01-01T00:00:00Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.28.0","io.kubernetes.cri.sandbox-id":"35d29561c1f5d9397b4944b18bf98459d3454b169cc0c4ebd658a27a3500efef","io.kubernetes.cri.sandbox-name":"kube-controller-manager-old-k8s-version-691094","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"84ef4fdb4d0c12a012863d9b76078617"},"owner":"root"},{"ociVersion":"1.2.1","id":"9cc7faf54132ee2da8525106f2f0da439e1d48a41206667ecc42c4add5564fcc","pid":941,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9cc7faf54132ee2da8525106f2f0da439e1d48a41206667ecc42c4add5564fcc","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9cc7faf54132ee2da8525106f2f0da439e1d48a41206667ecc42c4add5564fcc/rootfs","created":"2025-11-19T02:34:11.750591758Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.28.0","io.kubernetes.cri.sandbox-id":"0d529a5e629230c31fb73b3d2ed3cfce798ecc4d524c6ad341c7d6de85135800","io.kubernetes.cri.sandbox-name":"kube-apiserver-old-k8s-version-691094","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"43c6e24af58f4532899857c154187af1"},"owner":"root"},{"ociVersion":"1.2.1","id":"c1615f8b7603bea728b5006e32d7828f14c61d090c32205f627734bd31dbfc55","pid":0,"status":"stopped","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c1615f8b7603bea728b5006e32d7828f14c61d090c32205f627734bd31dbfc55","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c1615f8b7603bea728b5006e32d7828f14c61d090c32205f627734bd31dbfc55/rootfs","created":"0001-01-01T00:00:00Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.28.0","io.kubernetes.cri.sandbox-id":"769c8020ea41c57ba98f823e94ac992c01cb7b712588250eb6d55cb0c50b7475","io.kubernetes.cri.sandbox-name":"kube-scheduler-old-k8s-version-691094","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4bd64ce31c0d565619382fafb2d03a51"},"owner":"root"}]
	I1119 02:34:11.757851  324696 cri.go:126] list returned 8 containers
	I1119 02:34:11.757869  324696 cri.go:129] container: {ID:0d529a5e629230c31fb73b3d2ed3cfce798ecc4d524c6ad341c7d6de85135800 Status:running}
	I1119 02:34:11.757927  324696 cri.go:131] skipping 0d529a5e629230c31fb73b3d2ed3cfce798ecc4d524c6ad341c7d6de85135800 - not in ps
	I1119 02:34:11.757935  324696 cri.go:129] container: {ID:35d29561c1f5d9397b4944b18bf98459d3454b169cc0c4ebd658a27a3500efef Status:running}
	I1119 02:34:11.757941  324696 cri.go:131] skipping 35d29561c1f5d9397b4944b18bf98459d3454b169cc0c4ebd658a27a3500efef - not in ps
	I1119 02:34:11.757945  324696 cri.go:129] container: {ID:64f7107f2fd5d904d3ca02f88e2104c599bec7b13a530829ee0c761e554b6557 Status:created}
	I1119 02:34:11.757955  324696 cri.go:135] skipping {64f7107f2fd5d904d3ca02f88e2104c599bec7b13a530829ee0c761e554b6557 created}: state = "created", want "paused"
	I1119 02:34:11.757966  324696 cri.go:129] container: {ID:723d9c424dadf8757aed6fa281d70acfbc64ff1356e77d613f78aee773d061b5 Status:running}
	I1119 02:34:11.757972  324696 cri.go:131] skipping 723d9c424dadf8757aed6fa281d70acfbc64ff1356e77d613f78aee773d061b5 - not in ps
	I1119 02:34:11.757977  324696 cri.go:129] container: {ID:769c8020ea41c57ba98f823e94ac992c01cb7b712588250eb6d55cb0c50b7475 Status:running}
	I1119 02:34:11.757983  324696 cri.go:131] skipping 769c8020ea41c57ba98f823e94ac992c01cb7b712588250eb6d55cb0c50b7475 - not in ps
	I1119 02:34:11.757987  324696 cri.go:129] container: {ID:927b1e80c6867d0792be216dabd30704cdb03840f55b022c96a6e4f5b0fe51e5 Status:created}
	I1119 02:34:11.757992  324696 cri.go:131] skipping 927b1e80c6867d0792be216dabd30704cdb03840f55b022c96a6e4f5b0fe51e5 - not in ps
	I1119 02:34:11.757997  324696 cri.go:129] container: {ID:9cc7faf54132ee2da8525106f2f0da439e1d48a41206667ecc42c4add5564fcc Status:created}
	I1119 02:34:11.758006  324696 cri.go:135] skipping {9cc7faf54132ee2da8525106f2f0da439e1d48a41206667ecc42c4add5564fcc created}: state = "created", want "paused"
	I1119 02:34:11.758012  324696 cri.go:129] container: {ID:c1615f8b7603bea728b5006e32d7828f14c61d090c32205f627734bd31dbfc55 Status:stopped}
	I1119 02:34:11.758018  324696 cri.go:135] skipping {c1615f8b7603bea728b5006e32d7828f14c61d090c32205f627734bd31dbfc55 stopped}: state = "stopped", want "paused"
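
The cri.go lines above walk the runc container list in the k8s.io root and keep only entries whose state matches the wanted "paused" state; sandboxes not in the crictl ps output and containers that are created, running, or stopped are all skipped. A compact sketch of that filter (the type and helper names are illustrative, not minikube's):

	package main

	import "fmt"

	type container struct {
		ID     string
		Status string
	}

	// filterByState mirrors the logged decisions: containers in any other
	// state are skipped with a note, matching ones are kept.
	func filterByState(cs []container, want string) []container {
		var out []container
		for _, c := range cs {
			if c.Status != want {
				fmt.Printf("skipping %s - state = %q, want %q\n", c.ID, c.Status, want)
				continue
			}
			out = append(out, c)
		}
		return out
	}

	func main() {
		cs := []container{
			{"64f7107f2fd5", "created"},
			{"c1615f8b7603", "stopped"},
		}
		fmt.Println("matched:", filterByState(cs, "paused"))
	}
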
	I1119 02:34:11.758070  324696 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:34:11.769466  324696 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 02:34:11.769502  324696 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 02:34:11.769563  324696 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 02:34:11.783665  324696 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 02:34:11.785268  324696 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-691094" does not appear in /home/jenkins/minikube-integration/21924-11107/kubeconfig
	I1119 02:34:11.786176  324696 kubeconfig.go:62] /home/jenkins/minikube-integration/21924-11107/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-691094" cluster setting kubeconfig missing "old-k8s-version-691094" context setting]
	I1119 02:34:11.787491  324696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/kubeconfig: {Name:mka1017ae43351e828a5b28d36aa5ae57c3e40e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:34:11.789618  324696 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 02:34:11.803032  324696 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1119 02:34:11.803084  324696 kubeadm.go:602] duration metric: took 33.570577ms to restartPrimaryControlPlane
	I1119 02:34:11.803096  324696 kubeadm.go:403] duration metric: took 109.455267ms to StartCluster
	I1119 02:34:11.803117  324696 settings.go:142] acquiring lock: {Name:mka23b77a5b0acf39a3c823ca2c708cd59c6f5ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:34:11.803175  324696 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11107/kubeconfig
	I1119 02:34:11.805728  324696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/kubeconfig: {Name:mka1017ae43351e828a5b28d36aa5ae57c3e40e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
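
The kubeconfig lines above show the repair path: the profile's cluster and context entries are missing from the kubeconfig, so minikube rewrites the file under a write lock. A hedged sketch of the same check-and-repair using k8s.io/client-go's clientcmd package (the server URL is inferred from the node IP in the log, credential wiring is omitted, and minikube's own implementation differs in detail):

	package main

	import (
		"fmt"
		"log"

		"k8s.io/client-go/tools/clientcmd"
		clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
	)

	func main() {
		const path = "/home/jenkins/minikube-integration/21924-11107/kubeconfig"
		const name = "old-k8s-version-691094"

		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			log.Fatal(err)
		}
		if _, ok := cfg.Clusters[name]; !ok {
			fmt.Printf("kubeconfig missing %q cluster setting, repairing\n", name)
			cfg.Clusters[name] = &clientcmdapi.Cluster{Server: "https://192.168.103.2:8443"}
			cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
			if err := clientcmd.WriteToFile(*cfg, path); err != nil {
				log.Fatal(err)
			}
		}
	}
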
	I1119 02:34:11.806207  324696 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 02:34:11.806469  324696 config.go:182] Loaded profile config "old-k8s-version-691094": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1119 02:34:11.806484  324696 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:34:11.806986  324696 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-691094"
	I1119 02:34:11.807004  324696 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-691094"
	W1119 02:34:11.807012  324696 addons.go:248] addon storage-provisioner should already be in state true
	I1119 02:34:11.807040  324696 host.go:66] Checking if "old-k8s-version-691094" exists ...
	I1119 02:34:11.807558  324696 cli_runner.go:164] Run: docker container inspect old-k8s-version-691094 --format={{.State.Status}}
	I1119 02:34:11.807606  324696 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-691094"
	I1119 02:34:11.807632  324696 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-691094"
	I1119 02:34:11.807886  324696 addons.go:70] Setting metrics-server=true in profile "old-k8s-version-691094"
	I1119 02:34:11.807914  324696 addons.go:239] Setting addon metrics-server=true in "old-k8s-version-691094"
	W1119 02:34:11.807923  324696 addons.go:248] addon metrics-server should already be in state true
	I1119 02:34:11.807946  324696 cli_runner.go:164] Run: docker container inspect old-k8s-version-691094 --format={{.State.Status}}
	I1119 02:34:11.807975  324696 host.go:66] Checking if "old-k8s-version-691094" exists ...
	I1119 02:34:11.808120  324696 addons.go:70] Setting dashboard=true in profile "old-k8s-version-691094"
	I1119 02:34:11.808138  324696 addons.go:239] Setting addon dashboard=true in "old-k8s-version-691094"
	W1119 02:34:11.808146  324696 addons.go:248] addon dashboard should already be in state true
	I1119 02:34:11.808173  324696 host.go:66] Checking if "old-k8s-version-691094" exists ...
	I1119 02:34:11.808693  324696 cli_runner.go:164] Run: docker container inspect old-k8s-version-691094 --format={{.State.Status}}
	I1119 02:34:11.808736  324696 cli_runner.go:164] Run: docker container inspect old-k8s-version-691094 --format={{.State.Status}}
	I1119 02:34:11.812479  324696 out.go:179] * Verifying Kubernetes components...
	I1119 02:34:11.814892  324696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:34:11.844361  324696 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1119 02:34:11.844722  324696 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:34:11.845944  324696 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1119 02:34:11.845969  324696 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1119 02:34:11.846023  324696 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:34:11.846050  324696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-691094
	I1119 02:34:11.846051  324696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:34:11.846155  324696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-691094
	I1119 02:34:11.852414  324696 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-691094"
	W1119 02:34:11.852444  324696 addons.go:248] addon default-storageclass should already be in state true
	I1119 02:34:11.852476  324696 host.go:66] Checking if "old-k8s-version-691094" exists ...
	I1119 02:34:11.852959  324696 cli_runner.go:164] Run: docker container inspect old-k8s-version-691094 --format={{.State.Status}}
	I1119 02:34:11.867693  324696 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1119 02:34:11.869830  324696 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	ab00038d157e9       56cc512116c8f       8 seconds ago       Running             busybox                   0                   59ab2bcb83dd0       busybox                                      default
	efc94048807eb       52546a367cc9e       13 seconds ago      Running             coredns                   0                   4de395773869c       coredns-66bc5c9577-zjkgg                     kube-system
	07248d3fa7700       6e38f40d628db       13 seconds ago      Running             storage-provisioner       0                   d6f91d217e6c3       storage-provisioner                          kube-system
	627aeabcbd8b9       409467f978b4a       24 seconds ago      Running             kindnet-cni               0                   4d28a48ddf233       kindnet-rf6v9                                kube-system
	6a6b45ccc3386       fc25172553d79       25 seconds ago      Running             kube-proxy                0                   7f4f8e38d760a       kube-proxy-v65n7                             kube-system
	c41f3c5163fef       5f1f5298c888d       35 seconds ago      Running             etcd                      0                   f1bc2fcd4c787       etcd-embed-certs-168452                      kube-system
	429108bffb6c2       c3994bc696102       35 seconds ago      Running             kube-apiserver            0                   855b5a64cf042       kube-apiserver-embed-certs-168452            kube-system
	45e16dd5855dc       c80c8dbafe7dd       35 seconds ago      Running             kube-controller-manager   0                   fc384d699a6fe       kube-controller-manager-embed-certs-168452   kube-system
	16ff9f0719734       7dd6aaa1717ab       35 seconds ago      Running             kube-scheduler            0                   ce3809f0d4fbd       kube-scheduler-embed-certs-168452            kube-system
	
	
	==> containerd <==
	Nov 19 02:33:59 embed-certs-168452 containerd[663]: time="2025-11-19T02:33:59.270548100Z" level=info msg="Container efc94048807ebbcf329446ac8949c16a7392055b208364f9e03d0eae632bb378: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 02:33:59 embed-certs-168452 containerd[663]: time="2025-11-19T02:33:59.270833598Z" level=info msg="CreateContainer within sandbox \"d6f91d217e6c3552f3a69fa0623e507a5e67784dd771bed97dd1d06401b4bfd3\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"07248d3fa7700df350c76db3b67044ace82560729e01191bc23c841731cd3cfa\""
	Nov 19 02:33:59 embed-certs-168452 containerd[663]: time="2025-11-19T02:33:59.271491612Z" level=info msg="StartContainer for \"07248d3fa7700df350c76db3b67044ace82560729e01191bc23c841731cd3cfa\""
	Nov 19 02:33:59 embed-certs-168452 containerd[663]: time="2025-11-19T02:33:59.272595505Z" level=info msg="connecting to shim 07248d3fa7700df350c76db3b67044ace82560729e01191bc23c841731cd3cfa" address="unix:///run/containerd/s/41d78d767206b328aff4597f7c602e19da062a2cdbdc7b7e3a7ceb19b6896fef" protocol=ttrpc version=3
	Nov 19 02:33:59 embed-certs-168452 containerd[663]: time="2025-11-19T02:33:59.276910603Z" level=info msg="CreateContainer within sandbox \"4de395773869cc7dfa7289a9a9f472e975bfb3e3d6314b1e46a67f92b2934540\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"efc94048807ebbcf329446ac8949c16a7392055b208364f9e03d0eae632bb378\""
	Nov 19 02:33:59 embed-certs-168452 containerd[663]: time="2025-11-19T02:33:59.277542724Z" level=info msg="StartContainer for \"efc94048807ebbcf329446ac8949c16a7392055b208364f9e03d0eae632bb378\""
	Nov 19 02:33:59 embed-certs-168452 containerd[663]: time="2025-11-19T02:33:59.278513534Z" level=info msg="connecting to shim efc94048807ebbcf329446ac8949c16a7392055b208364f9e03d0eae632bb378" address="unix:///run/containerd/s/44ebb2baff15a7c6e6afe805c3f620ba2ac710b53a58cf2d4b044655f2f3e3b4" protocol=ttrpc version=3
	Nov 19 02:33:59 embed-certs-168452 containerd[663]: time="2025-11-19T02:33:59.337834277Z" level=info msg="StartContainer for \"07248d3fa7700df350c76db3b67044ace82560729e01191bc23c841731cd3cfa\" returns successfully"
	Nov 19 02:33:59 embed-certs-168452 containerd[663]: time="2025-11-19T02:33:59.345187757Z" level=info msg="StartContainer for \"efc94048807ebbcf329446ac8949c16a7392055b208364f9e03d0eae632bb378\" returns successfully"
	Nov 19 02:34:02 embed-certs-168452 containerd[663]: time="2025-11-19T02:34:02.161091209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:21d4a418-fd63-4ac5-922d-cb793556218b,Namespace:default,Attempt:0,}"
	Nov 19 02:34:02 embed-certs-168452 containerd[663]: time="2025-11-19T02:34:02.196961969Z" level=info msg="connecting to shim 59ab2bcb83dd06387c432c2650e201a9cbd86173d58f8d0db1922ad1ddd3dfdd" address="unix:///run/containerd/s/8503a785494125141260421ffc0ed807f49afa13268713db5864942eddb5e97a" namespace=k8s.io protocol=ttrpc version=3
	Nov 19 02:34:02 embed-certs-168452 containerd[663]: time="2025-11-19T02:34:02.281117752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:21d4a418-fd63-4ac5-922d-cb793556218b,Namespace:default,Attempt:0,} returns sandbox id \"59ab2bcb83dd06387c432c2650e201a9cbd86173d58f8d0db1922ad1ddd3dfdd\""
	Nov 19 02:34:02 embed-certs-168452 containerd[663]: time="2025-11-19T02:34:02.283495058Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 02:34:04 embed-certs-168452 containerd[663]: time="2025-11-19T02:34:04.383809866Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 02:34:04 embed-certs-168452 containerd[663]: time="2025-11-19T02:34:04.384560370Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396646"
	Nov 19 02:34:04 embed-certs-168452 containerd[663]: time="2025-11-19T02:34:04.385692546Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 02:34:04 embed-certs-168452 containerd[663]: time="2025-11-19T02:34:04.387641417Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 02:34:04 embed-certs-168452 containerd[663]: time="2025-11-19T02:34:04.387999891Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.104463265s"
	Nov 19 02:34:04 embed-certs-168452 containerd[663]: time="2025-11-19T02:34:04.388046276Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 19 02:34:04 embed-certs-168452 containerd[663]: time="2025-11-19T02:34:04.392730733Z" level=info msg="CreateContainer within sandbox \"59ab2bcb83dd06387c432c2650e201a9cbd86173d58f8d0db1922ad1ddd3dfdd\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 19 02:34:04 embed-certs-168452 containerd[663]: time="2025-11-19T02:34:04.400882222Z" level=info msg="Container ab00038d157e9a07c8bb58bb1fb42e71f81dccc9c10114391264bdc14a97ae8e: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 02:34:04 embed-certs-168452 containerd[663]: time="2025-11-19T02:34:04.406789913Z" level=info msg="CreateContainer within sandbox \"59ab2bcb83dd06387c432c2650e201a9cbd86173d58f8d0db1922ad1ddd3dfdd\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"ab00038d157e9a07c8bb58bb1fb42e71f81dccc9c10114391264bdc14a97ae8e\""
	Nov 19 02:34:04 embed-certs-168452 containerd[663]: time="2025-11-19T02:34:04.407430115Z" level=info msg="StartContainer for \"ab00038d157e9a07c8bb58bb1fb42e71f81dccc9c10114391264bdc14a97ae8e\""
	Nov 19 02:34:04 embed-certs-168452 containerd[663]: time="2025-11-19T02:34:04.408276224Z" level=info msg="connecting to shim ab00038d157e9a07c8bb58bb1fb42e71f81dccc9c10114391264bdc14a97ae8e" address="unix:///run/containerd/s/8503a785494125141260421ffc0ed807f49afa13268713db5864942eddb5e97a" protocol=ttrpc version=3
	Nov 19 02:34:04 embed-certs-168452 containerd[663]: time="2025-11-19T02:34:04.468576361Z" level=info msg="StartContainer for \"ab00038d157e9a07c8bb58bb1fb42e71f81dccc9c10114391264bdc14a97ae8e\" returns successfully"
	
	
	==> coredns [efc94048807ebbcf329446ac8949c16a7392055b208364f9e03d0eae632bb378] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43809 - 5129 "HINFO IN 5397253924571860853.286299948829042260. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.067993989s
	
	
	==> describe nodes <==
	Name:               embed-certs-168452
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-168452
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=embed-certs-168452
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T02_33_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 02:33:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-168452
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 02:34:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 02:34:12 +0000   Wed, 19 Nov 2025 02:33:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 02:34:12 +0000   Wed, 19 Nov 2025 02:33:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 02:34:12 +0000   Wed, 19 Nov 2025 02:33:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 02:34:12 +0000   Wed, 19 Nov 2025 02:33:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-168452
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                ba96682e-cbc3-44e5-a3b6-1fb8a6a2ab97
	  Boot ID:                    fea1659d-b751-4f87-a281-819adf52de2d
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-zjkgg                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-embed-certs-168452                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-rf6v9                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-embed-certs-168452             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-embed-certs-168452    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-v65n7                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-embed-certs-168452             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 32s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  32s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  32s   kubelet          Node embed-certs-168452 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s   kubelet          Node embed-certs-168452 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s   kubelet          Node embed-certs-168452 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s   node-controller  Node embed-certs-168452 event: Registered Node embed-certs-168452 in Controller
	  Normal  NodeReady                15s   kubelet          Node embed-certs-168452 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 cb 73 dc 0d 8a 08 06
	[Nov19 02:31] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 74 0c d7 a6 53 08 06
	[  +0.000339] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 cb 73 dc 0d 8a 08 06
	[ +28.680399] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 e9 7c 92 36 13 08 06
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff fa 7c 46 16 38 40 08 06
	[Nov19 02:32] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1a f6 63 65 24 bc 08 06
	[  +4.552839] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 88 80 5a bc 94 08 06
	[ +11.086189] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 76 d1 26 7f 3d 08 06
	[  +0.000377] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa 7c 46 16 38 40 08 06
	[  +9.270754] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a2 49 fd 34 51 3b 08 06
	[  +0.000702] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 88 80 5a bc 94 08 06
	[ +23.593864] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca 86 43 5f 18 4c 08 06
	[  +0.000495] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a f6 63 65 24 bc 08 06
	
	
	==> etcd [c41f3c5163fefcf60222c4a4a67440987198fb6dfef8b03ce4bb796c1ab758eb] <==
	{"level":"warn","ts":"2025-11-19T02:33:38.765710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.774487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.782592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.790685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.798745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.807234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.815776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.823438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.830931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.839536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.848592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.856527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.862915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.871123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.880013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.886996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.893756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.902133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.909750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.924567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.935010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.956170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.963848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.970968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:39.018322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59268","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 02:34:13 up  1:16,  0 user,  load average: 5.67, 4.02, 2.64
	Linux embed-certs-168452 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [627aeabcbd8b9f66bf990c7bdfb13e60420783283ad0163466ec4d8e1e0bd079] <==
	I1119 02:33:48.548662       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 02:33:48.548906       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1119 02:33:48.549062       1 main.go:148] setting mtu 1500 for CNI 
	I1119 02:33:48.549080       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 02:33:48.549093       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T02:33:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 02:33:48.752772       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 02:33:48.752805       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 02:33:48.752827       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 02:33:48.846283       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 02:33:49.221523       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 02:33:49.221557       1 metrics.go:72] Registering metrics
	I1119 02:33:49.221651       1 controller.go:711] "Syncing nftables rules"
	I1119 02:33:58.757479       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1119 02:33:58.757535       1 main.go:301] handling current node
	I1119 02:34:08.754725       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1119 02:34:08.754773       1 main.go:301] handling current node
	
	
	==> kube-apiserver [429108bffb6c2a8fe9a429fb55f970610b4ab0d090e445d686e4bdb4bb295962] <==
	E1119 02:33:39.600810       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1119 02:33:39.647945       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 02:33:39.653715       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:33:39.654591       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1119 02:33:39.660768       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:33:39.661775       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 02:33:39.754573       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 02:33:40.452240       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 02:33:40.456529       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 02:33:40.456548       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 02:33:40.979152       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 02:33:41.024712       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 02:33:41.154801       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 02:33:41.161403       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1119 02:33:41.162608       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 02:33:41.167235       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 02:33:41.483687       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 02:33:41.922051       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 02:33:41.932654       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 02:33:41.940904       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 02:33:47.284898       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1119 02:33:47.436098       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 02:33:47.587889       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:33:47.592312       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1119 02:34:12.032965       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:41640: use of closed network connection
	
	
	==> kube-controller-manager [45e16dd5855dc98d25ef7af6cc8b57610efe471ab11301d4a7c5def7b1ccc943] <==
	I1119 02:33:46.469408       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 02:33:46.469433       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 02:33:46.469443       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 02:33:46.481172       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1119 02:33:46.482217       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1119 02:33:46.482234       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1119 02:33:46.482260       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1119 02:33:46.482624       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1119 02:33:46.482679       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1119 02:33:46.482689       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1119 02:33:46.482768       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1119 02:33:46.483439       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 02:33:46.483449       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 02:33:46.483493       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 02:33:46.483496       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1119 02:33:46.483539       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1119 02:33:46.483483       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1119 02:33:46.483609       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1119 02:33:46.484945       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 02:33:46.487248       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1119 02:33:46.489503       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 02:33:46.489512       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 02:33:46.499705       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1119 02:33:46.499758       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 02:34:01.415907       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [6a6b45ccc3386f2598d874f487c473de83d06079544c8fed7813366dd5764001] <==
	I1119 02:33:47.918676       1 server_linux.go:53] "Using iptables proxy"
	I1119 02:33:47.990453       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 02:33:48.090983       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 02:33:48.091028       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1119 02:33:48.091137       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 02:33:48.123636       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 02:33:48.123873       1 server_linux.go:132] "Using iptables Proxier"
	I1119 02:33:48.132056       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 02:33:48.132667       1 server.go:527] "Version info" version="v1.34.1"
	I1119 02:33:48.132698       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:33:48.137997       1 config.go:200] "Starting service config controller"
	I1119 02:33:48.138706       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 02:33:48.140977       1 config.go:106] "Starting endpoint slice config controller"
	I1119 02:33:48.141009       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 02:33:48.141022       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 02:33:48.141079       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 02:33:48.141982       1 config.go:309] "Starting node config controller"
	I1119 02:33:48.142786       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 02:33:48.142818       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 02:33:48.243583       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 02:33:48.243635       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 02:33:48.243639       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [16ff9f07197344243d6fe00befa11b307459273134b9a9d4fdf6f89d375e78a5] <==
	I1119 02:33:39.850444       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:33:39.850770       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 02:33:39.850826       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1119 02:33:39.852347       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1119 02:33:39.853543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 02:33:39.854344       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 02:33:39.854402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 02:33:39.854732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 02:33:39.854771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 02:33:39.854801       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 02:33:39.855055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 02:33:39.856252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 02:33:39.856298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 02:33:39.856362       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 02:33:39.856425       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 02:33:39.856487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 02:33:39.856571       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 02:33:39.856670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 02:33:39.856697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 02:33:39.856705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 02:33:39.856795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 02:33:39.856914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 02:33:40.739427       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 02:33:40.794085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1119 02:33:41.451337       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 02:33:42 embed-certs-168452 kubelet[1466]: E1119 02:33:42.789229    1466 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-embed-certs-168452\" already exists" pod="kube-system/etcd-embed-certs-168452"
	Nov 19 02:33:42 embed-certs-168452 kubelet[1466]: I1119 02:33:42.800678    1466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-168452" podStartSLOduration=1.8006546079999999 podStartE2EDuration="1.800654608s" podCreationTimestamp="2025-11-19 02:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:33:42.800558386 +0000 UTC m=+1.128773561" watchObservedRunningTime="2025-11-19 02:33:42.800654608 +0000 UTC m=+1.128869741"
	Nov 19 02:33:42 embed-certs-168452 kubelet[1466]: I1119 02:33:42.809536    1466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-168452" podStartSLOduration=1.8095134750000001 podStartE2EDuration="1.809513475s" podCreationTimestamp="2025-11-19 02:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:33:42.809274996 +0000 UTC m=+1.137490149" watchObservedRunningTime="2025-11-19 02:33:42.809513475 +0000 UTC m=+1.137728613"
	Nov 19 02:33:42 embed-certs-168452 kubelet[1466]: I1119 02:33:42.818091    1466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-168452" podStartSLOduration=1.818068776 podStartE2EDuration="1.818068776s" podCreationTimestamp="2025-11-19 02:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:33:42.817975386 +0000 UTC m=+1.146190539" watchObservedRunningTime="2025-11-19 02:33:42.818068776 +0000 UTC m=+1.146283929"
	Nov 19 02:33:42 embed-certs-168452 kubelet[1466]: I1119 02:33:42.827716    1466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-168452" podStartSLOduration=1.8276958730000001 podStartE2EDuration="1.827695873s" podCreationTimestamp="2025-11-19 02:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:33:42.827681062 +0000 UTC m=+1.155896217" watchObservedRunningTime="2025-11-19 02:33:42.827695873 +0000 UTC m=+1.155911027"
	Nov 19 02:33:46 embed-certs-168452 kubelet[1466]: I1119 02:33:46.508359    1466 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 19 02:33:46 embed-certs-168452 kubelet[1466]: I1119 02:33:46.509179    1466 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 19 02:33:47 embed-certs-168452 kubelet[1466]: I1119 02:33:47.381223    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e29d839-0594-41f7-bfd8-1f9ab66b4c86-xtables-lock\") pod \"kindnet-rf6v9\" (UID: \"6e29d839-0594-41f7-bfd8-1f9ab66b4c86\") " pod="kube-system/kindnet-rf6v9"
	Nov 19 02:33:47 embed-certs-168452 kubelet[1466]: I1119 02:33:47.381397    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e29d839-0594-41f7-bfd8-1f9ab66b4c86-lib-modules\") pod \"kindnet-rf6v9\" (UID: \"6e29d839-0594-41f7-bfd8-1f9ab66b4c86\") " pod="kube-system/kindnet-rf6v9"
	Nov 19 02:33:47 embed-certs-168452 kubelet[1466]: I1119 02:33:47.381443    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/edc341f0-decd-4b30-a13d-a730cb8fc47d-xtables-lock\") pod \"kube-proxy-v65n7\" (UID: \"edc341f0-decd-4b30-a13d-a730cb8fc47d\") " pod="kube-system/kube-proxy-v65n7"
	Nov 19 02:33:47 embed-certs-168452 kubelet[1466]: I1119 02:33:47.381468    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/edc341f0-decd-4b30-a13d-a730cb8fc47d-lib-modules\") pod \"kube-proxy-v65n7\" (UID: \"edc341f0-decd-4b30-a13d-a730cb8fc47d\") " pod="kube-system/kube-proxy-v65n7"
	Nov 19 02:33:47 embed-certs-168452 kubelet[1466]: I1119 02:33:47.381487    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6e29d839-0594-41f7-bfd8-1f9ab66b4c86-cni-cfg\") pod \"kindnet-rf6v9\" (UID: \"6e29d839-0594-41f7-bfd8-1f9ab66b4c86\") " pod="kube-system/kindnet-rf6v9"
	Nov 19 02:33:47 embed-certs-168452 kubelet[1466]: I1119 02:33:47.381525    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/edc341f0-decd-4b30-a13d-a730cb8fc47d-kube-proxy\") pod \"kube-proxy-v65n7\" (UID: \"edc341f0-decd-4b30-a13d-a730cb8fc47d\") " pod="kube-system/kube-proxy-v65n7"
	Nov 19 02:33:47 embed-certs-168452 kubelet[1466]: I1119 02:33:47.381550    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tzcs\" (UniqueName: \"kubernetes.io/projected/6e29d839-0594-41f7-bfd8-1f9ab66b4c86-kube-api-access-5tzcs\") pod \"kindnet-rf6v9\" (UID: \"6e29d839-0594-41f7-bfd8-1f9ab66b4c86\") " pod="kube-system/kindnet-rf6v9"
	Nov 19 02:33:47 embed-certs-168452 kubelet[1466]: I1119 02:33:47.381622    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9762k\" (UniqueName: \"kubernetes.io/projected/edc341f0-decd-4b30-a13d-a730cb8fc47d-kube-api-access-9762k\") pod \"kube-proxy-v65n7\" (UID: \"edc341f0-decd-4b30-a13d-a730cb8fc47d\") " pod="kube-system/kube-proxy-v65n7"
	Nov 19 02:33:48 embed-certs-168452 kubelet[1466]: I1119 02:33:48.807341    1466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-v65n7" podStartSLOduration=1.807314915 podStartE2EDuration="1.807314915s" podCreationTimestamp="2025-11-19 02:33:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:33:48.806589928 +0000 UTC m=+7.134805081" watchObservedRunningTime="2025-11-19 02:33:48.807314915 +0000 UTC m=+7.135530067"
	Nov 19 02:33:48 embed-certs-168452 kubelet[1466]: I1119 02:33:48.821123    1466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-rf6v9" podStartSLOduration=1.821100286 podStartE2EDuration="1.821100286s" podCreationTimestamp="2025-11-19 02:33:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:33:48.820893573 +0000 UTC m=+7.149108727" watchObservedRunningTime="2025-11-19 02:33:48.821100286 +0000 UTC m=+7.149315451"
	Nov 19 02:33:58 embed-certs-168452 kubelet[1466]: I1119 02:33:58.798002    1466 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 19 02:33:58 embed-certs-168452 kubelet[1466]: I1119 02:33:58.870226    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c9ef71f-e4c3-4b8d-9ad3-71e0a56231e3-config-volume\") pod \"coredns-66bc5c9577-zjkgg\" (UID: \"5c9ef71f-e4c3-4b8d-9ad3-71e0a56231e3\") " pod="kube-system/coredns-66bc5c9577-zjkgg"
	Nov 19 02:33:58 embed-certs-168452 kubelet[1466]: I1119 02:33:58.870304    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thllq\" (UniqueName: \"kubernetes.io/projected/5c9ef71f-e4c3-4b8d-9ad3-71e0a56231e3-kube-api-access-thllq\") pod \"coredns-66bc5c9577-zjkgg\" (UID: \"5c9ef71f-e4c3-4b8d-9ad3-71e0a56231e3\") " pod="kube-system/coredns-66bc5c9577-zjkgg"
	Nov 19 02:33:58 embed-certs-168452 kubelet[1466]: I1119 02:33:58.870338    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/eebce997-029a-4da2-b6cd-bb0ff195ebbe-tmp\") pod \"storage-provisioner\" (UID: \"eebce997-029a-4da2-b6cd-bb0ff195ebbe\") " pod="kube-system/storage-provisioner"
	Nov 19 02:33:58 embed-certs-168452 kubelet[1466]: I1119 02:33:58.870731    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dslfc\" (UniqueName: \"kubernetes.io/projected/eebce997-029a-4da2-b6cd-bb0ff195ebbe-kube-api-access-dslfc\") pod \"storage-provisioner\" (UID: \"eebce997-029a-4da2-b6cd-bb0ff195ebbe\") " pod="kube-system/storage-provisioner"
	Nov 19 02:33:59 embed-certs-168452 kubelet[1466]: I1119 02:33:59.831649    1466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zjkgg" podStartSLOduration=12.831625567 podStartE2EDuration="12.831625567s" podCreationTimestamp="2025-11-19 02:33:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:33:59.831400228 +0000 UTC m=+18.159615402" watchObservedRunningTime="2025-11-19 02:33:59.831625567 +0000 UTC m=+18.159840720"
	Nov 19 02:33:59 embed-certs-168452 kubelet[1466]: I1119 02:33:59.853960    1466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.853935943 podStartE2EDuration="11.853935943s" podCreationTimestamp="2025-11-19 02:33:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:33:59.853561665 +0000 UTC m=+18.181776818" watchObservedRunningTime="2025-11-19 02:33:59.853935943 +0000 UTC m=+18.182151096"
	Nov 19 02:34:01 embed-certs-168452 kubelet[1466]: I1119 02:34:01.889306    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krtnz\" (UniqueName: \"kubernetes.io/projected/21d4a418-fd63-4ac5-922d-cb793556218b-kube-api-access-krtnz\") pod \"busybox\" (UID: \"21d4a418-fd63-4ac5-922d-cb793556218b\") " pod="default/busybox"
	
	
	==> storage-provisioner [07248d3fa7700df350c76db3b67044ace82560729e01191bc23c841731cd3cfa] <==
	I1119 02:33:59.355307       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 02:33:59.366408       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 02:33:59.366484       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 02:33:59.370339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:33:59.376043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 02:33:59.376221       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 02:33:59.376292       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4f3365e9-4f71-41d4-a675-26dba5ec0200", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-168452_382dea49-21ad-4002-8312-0e31e936f03e became leader
	I1119 02:33:59.376507       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-168452_382dea49-21ad-4002-8312-0e31e936f03e!
	W1119 02:33:59.379410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:33:59.384523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 02:33:59.476976       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-168452_382dea49-21ad-4002-8312-0e31e936f03e!
	W1119 02:34:01.388541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:34:01.394432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:34:03.399022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:34:03.403255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:34:05.407192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:34:05.411112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:34:07.414502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:34:07.418234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:34:09.421803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:34:09.428291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:34:11.431520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:34:11.435960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:34:13.439701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:34:13.444437       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
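The repeated `v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice` warnings in the storage-provisioner log above are emitted each time its leader-election loop touches the Endpoints-based lock `kube-system/k8s.io-minikube-hostpath`; the two-second cadence of the timestamps suggests a 2s retry period. For reference, a minimal sketch of the same lock expressed over a coordination.k8s.io/v1 Lease (the object the warning points to), using client-go; the clientset wiring and callback bodies are illustrative assumptions, not minikube's actual provisioner code:

	package main

	import (
		"context"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		// Assumes in-cluster credentials, as the provisioner pod would have.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		id, _ := os.Hostname()

		// Lease-based lock; same namespace/name as the lock in the log above.
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second, // matches the ~2s warning cadence above
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { /* start the provisioner controller */ },
				OnStoppedLeading: func() { /* stop cleanly */ },
			},
		})
	}

A Lease lock avoids the deprecated Endpoints writes entirely, so the renew loop would produce no such warnings.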
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-168452 -n embed-certs-168452
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-168452 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
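The `--field-selector=status.phase!=Running` query above is the harness's stuck-pod heuristic: any name printed flags a pod that never reached Running (Succeeded pods would match too, which is fine on a freshly deployed cluster). A self-contained sketch of the same check from Go, shelling out to kubectl the way the harness does; the function name and error handling are illustrative, not the helpers_test.go implementation:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// checkNoStuckPods mirrors the query above: list pod names in every
	// namespace whose status.phase is not Running; empty output passes.
	func checkNoStuckPods(kubeContext string) error {
		out, err := exec.Command("kubectl",
			"--context", kubeContext,
			"get", "po", "-A",
			"-o", "jsonpath={.items[*].metadata.name}",
			"--field-selector", "status.phase!=Running",
		).CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl failed: %v: %s", err, out)
		}
		if names := strings.TrimSpace(string(out)); names != "" {
			return fmt.Errorf("pods not Running: %s", names)
		}
		return nil
	}

	func main() {
		if err := checkNoStuckPods("embed-certs-168452"); err != nil {
			fmt.Println(err)
		}
	}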
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-168452
helpers_test.go:243: (dbg) docker inspect embed-certs-168452:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "14fb37aefb5bd1cb8ec42ec109da06032662983ae02a3fa83036ce381167f905",
	        "Created": "2025-11-19T02:33:25.873238592Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 316534,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T02:33:25.915223704Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/14fb37aefb5bd1cb8ec42ec109da06032662983ae02a3fa83036ce381167f905/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/14fb37aefb5bd1cb8ec42ec109da06032662983ae02a3fa83036ce381167f905/hostname",
	        "HostsPath": "/var/lib/docker/containers/14fb37aefb5bd1cb8ec42ec109da06032662983ae02a3fa83036ce381167f905/hosts",
	        "LogPath": "/var/lib/docker/containers/14fb37aefb5bd1cb8ec42ec109da06032662983ae02a3fa83036ce381167f905/14fb37aefb5bd1cb8ec42ec109da06032662983ae02a3fa83036ce381167f905-json.log",
	        "Name": "/embed-certs-168452",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-168452:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-168452",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "14fb37aefb5bd1cb8ec42ec109da06032662983ae02a3fa83036ce381167f905",
	                "LowerDir": "/var/lib/docker/overlay2/2c7e61b62a859781345fc39605045f5b0ddb25e8581ee80965ef7e33e7ef9e35-init/diff:/var/lib/docker/overlay2/de7938e6a920c133c8c6b988444cfbf6706fdc6982445229ca70e2488a725edb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2c7e61b62a859781345fc39605045f5b0ddb25e8581ee80965ef7e33e7ef9e35/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2c7e61b62a859781345fc39605045f5b0ddb25e8581ee80965ef7e33e7ef9e35/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2c7e61b62a859781345fc39605045f5b0ddb25e8581ee80965ef7e33e7ef9e35/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-168452",
	                "Source": "/var/lib/docker/volumes/embed-certs-168452/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-168452",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-168452",
	                "name.minikube.sigs.k8s.io": "embed-certs-168452",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "679abce52d7b7add15d270d125bfda95287d1d3669dc0ecae8498e1d1004ba08",
	            "SandboxKey": "/var/run/docker/netns/679abce52d7b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-168452": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "140e12fde1bf50003f04a6d771e1153e0e129959991946ee8cd5220e8e5fd632",
	                    "EndpointID": "7fdb01e4ecdc7fdfbc3eb09964dbd1e688c65968c77d3ae3e283cdaf220f296c",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "f2:15:1b:af:0f:f9",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-168452",
	                        "14fb37aefb5b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
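Every published port in the inspect output above is bound to 127.0.0.1 with an ephemeral host port (33105-33109); the harness later extracts the SSH port with the Go template `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}` (visible in the Last Start log below). A sketch reproducing that lookup outside the CLI by decoding `docker inspect` JSON and executing the same template; assumes the docker binary is on PATH:

	package main

	import (
		"encoding/json"
		"os"
		"os/exec"
		"text/template"
	)

	func main() {
		// Decode the full `docker inspect` JSON for the node container...
		out, err := exec.Command("docker", "inspect", "embed-certs-168452").Output()
		if err != nil {
			panic(err)
		}
		var containers []map[string]interface{}
		if err := json.Unmarshal(out, &containers); err != nil || len(containers) == 0 {
			panic("no such container")
		}
		// ...and apply the same Go template the harness passes to --format.
		tmpl := template.Must(template.New("sshPort").Parse(
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
		if err := tmpl.Execute(os.Stdout, containers[0]); err != nil {
			panic(err)
		}
	}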
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-168452 -n embed-certs-168452
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-168452 logs -n 25
E1119 02:34:14.467351   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/auto-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:34:14.473744   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/auto-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:34:14.485214   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/auto-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:34:14.507662   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/auto-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:34:14.548981   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/auto-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:34:14.630419   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/auto-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:34:14.791974   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/auto-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:34:15.114098   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/auto-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-168452 logs -n 25: (1.305296482s)
E1119 02:34:15.756733   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/auto-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
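The interleaved `cert_rotation.go:172` errors are emitted by the test binary's own client-go transport cache: a cached transport still references the client certificate of the `auto-212776` profile, whose files were evidently removed when that profile was deleted earlier in the run, so every reload attempt fails. A sketch of a quick audit for this condition (walking a kubeconfig for user entries whose certificate files are missing), using client-go's clientcmd loader; illustrative, not part of the harness:

	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Walk every user entry in the kubeconfig and report certificate
		// files that no longer exist on disk, the condition behind the
		// "Loading client cert failed" errors above.
		cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		for name, auth := range cfg.AuthInfos {
			for _, path := range []string{auth.ClientCertificate, auth.ClientKey} {
				if path == "" {
					continue
				}
				if _, statErr := os.Stat(path); statErr != nil {
					fmt.Printf("user %q: %v\n", name, statErr)
				}
			}
		}
	}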
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-212776 sudo systemctl cat docker --no-pager                                                                                                                                                                                               │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cat /etc/docker/daemon.json                                                                                                                                                                                                   │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo docker system info                                                                                                                                                                                                            │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                           │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                           │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cri-dockerd --version                                                                                                                                                                                                         │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo containerd config dump                                                                                                                                                                                                        │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │                     │
	│ ssh     │ -p bridge-212776 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ ssh     │ -p bridge-212776 sudo crio config                                                                                                                                                                                                                   │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ delete  │ -p bridge-212776                                                                                                                                                                                                                                    │ bridge-212776          │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ start   │ -p embed-certs-168452 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-168452     │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:34 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-691094 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-691094 │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
	│ stop    │ -p old-k8s-version-691094 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-691094 │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:34 UTC │
	│ addons  │ enable metrics-server -p no-preload-483142 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-483142      │ jenkins │ v1.37.0 │ 19 Nov 25 02:34 UTC │ 19 Nov 25 02:34 UTC │
	│ stop    │ -p no-preload-483142 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-483142      │ jenkins │ v1.37.0 │ 19 Nov 25 02:34 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-691094 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-691094 │ jenkins │ v1.37.0 │ 19 Nov 25 02:34 UTC │ 19 Nov 25 02:34 UTC │
	│ start   │ -p old-k8s-version-691094 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-691094 │ jenkins │ v1.37.0 │ 19 Nov 25 02:34 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 02:34:04
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 02:34:04.919150  324696 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:34:04.919282  324696 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:34:04.919287  324696 out.go:374] Setting ErrFile to fd 2...
	I1119 02:34:04.919291  324696 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:34:04.919526  324696 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11107/.minikube/bin
	I1119 02:34:04.919996  324696 out.go:368] Setting JSON to false
	I1119 02:34:04.921305  324696 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4585,"bootTime":1763515060,"procs":312,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 02:34:04.921427  324696 start.go:143] virtualization: kvm guest
	I1119 02:34:04.923797  324696 out.go:179] * [old-k8s-version-691094] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 02:34:04.925360  324696 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:34:04.925400  324696 notify.go:221] Checking for updates...
	I1119 02:34:04.928271  324696 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:34:04.929908  324696 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-11107/kubeconfig
	I1119 02:34:04.931330  324696 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11107/.minikube
	I1119 02:34:04.932627  324696 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 02:34:04.934014  324696 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:34:04.936017  324696 config.go:182] Loaded profile config "old-k8s-version-691094": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1119 02:34:04.937743  324696 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1119 02:34:04.938948  324696 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:34:04.966788  324696 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 02:34:04.966946  324696 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:34:05.032906  324696 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-19 02:34:05.020949459 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:34:05.033104  324696 docker.go:319] overlay module found
	I1119 02:34:05.035095  324696 out.go:179] * Using the docker driver based on existing profile
	I1119 02:34:05.036228  324696 start.go:309] selected driver: docker
	I1119 02:34:05.036251  324696 start.go:930] validating driver "docker" against &{Name:old-k8s-version-691094 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-691094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:34:05.036360  324696 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:34:05.037062  324696 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:34:05.094450  324696 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-19 02:34:05.084491917 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:34:05.094705  324696 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:34:05.094732  324696 cni.go:84] Creating CNI manager for ""
	I1119 02:34:05.094775  324696 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 02:34:05.094801  324696 start.go:353] cluster config:
	{Name:old-k8s-version-691094 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-691094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:34:05.096713  324696 out.go:179] * Starting "old-k8s-version-691094" primary control-plane node in "old-k8s-version-691094" cluster
	I1119 02:34:05.097833  324696 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1119 02:34:05.098936  324696 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 02:34:05.100075  324696 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1119 02:34:05.100110  324696 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1119 02:34:05.100119  324696 cache.go:65] Caching tarball of preloaded images
	I1119 02:34:05.100190  324696 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 02:34:05.100202  324696 preload.go:238] Found /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1119 02:34:05.100211  324696 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1119 02:34:05.100314  324696 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/old-k8s-version-691094/config.json ...
	I1119 02:34:05.120962  324696 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 02:34:05.120979  324696 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 02:34:05.120995  324696 cache.go:243] Successfully downloaded all kic artifacts
	I1119 02:34:05.121027  324696 start.go:360] acquireMachinesLock for old-k8s-version-691094: {Name:mkfb000600dc66dbf8c170048dfbe67bdac66bf9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:34:05.121100  324696 start.go:364] duration metric: took 40.433µs to acquireMachinesLock for "old-k8s-version-691094"
	I1119 02:34:05.121123  324696 start.go:96] Skipping create...Using existing machine configuration
	I1119 02:34:05.121133  324696 fix.go:54] fixHost starting: 
	I1119 02:34:05.121390  324696 cli_runner.go:164] Run: docker container inspect old-k8s-version-691094 --format={{.State.Status}}
	I1119 02:34:05.139585  324696 fix.go:112] recreateIfNeeded on old-k8s-version-691094: state=Stopped err=<nil>
	W1119 02:34:05.139615  324696 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 02:34:05.141570  324696 out.go:252] * Restarting existing docker container for "old-k8s-version-691094" ...
	I1119 02:34:05.141642  324696 cli_runner.go:164] Run: docker start old-k8s-version-691094
	I1119 02:34:05.423915  324696 cli_runner.go:164] Run: docker container inspect old-k8s-version-691094 --format={{.State.Status}}
	I1119 02:34:05.443823  324696 kic.go:430] container "old-k8s-version-691094" state is running.
	I1119 02:34:05.444247  324696 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-691094
	I1119 02:34:05.464675  324696 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/old-k8s-version-691094/config.json ...
	I1119 02:34:05.464891  324696 machine.go:94] provisionDockerMachine start ...
	I1119 02:34:05.464954  324696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-691094
	I1119 02:34:05.484472  324696 main.go:143] libmachine: Using SSH client type: native
	I1119 02:34:05.484779  324696 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33110 <nil> <nil>}
	I1119 02:34:05.484794  324696 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 02:34:05.485567  324696 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57472->127.0.0.1:33110: read: connection reset by peer
	I1119 02:34:08.621350  324696 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-691094
	
	I1119 02:34:08.621417  324696 ubuntu.go:182] provisioning hostname "old-k8s-version-691094"
	I1119 02:34:08.621483  324696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-691094
	I1119 02:34:08.639845  324696 main.go:143] libmachine: Using SSH client type: native
	I1119 02:34:08.640137  324696 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33110 <nil> <nil>}
	I1119 02:34:08.640158  324696 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-691094 && echo "old-k8s-version-691094" | sudo tee /etc/hostname
	I1119 02:34:08.781521  324696 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-691094
	
	I1119 02:34:08.781605  324696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-691094
	I1119 02:34:08.801090  324696 main.go:143] libmachine: Using SSH client type: native
	I1119 02:34:08.801287  324696 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33110 <nil> <nil>}
	I1119 02:34:08.801303  324696 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-691094' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-691094/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-691094' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 02:34:08.935253  324696 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 02:34:08.935285  324696 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-11107/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-11107/.minikube}
	I1119 02:34:08.935311  324696 ubuntu.go:190] setting up certificates
	I1119 02:34:08.935321  324696 provision.go:84] configureAuth start
	I1119 02:34:08.935410  324696 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-691094
	I1119 02:34:08.954025  324696 provision.go:143] copyHostCerts
	I1119 02:34:08.954084  324696 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11107/.minikube/ca.pem, removing ...
	I1119 02:34:08.954095  324696 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11107/.minikube/ca.pem
	I1119 02:34:08.954168  324696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-11107/.minikube/ca.pem (1082 bytes)
	I1119 02:34:08.954269  324696 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11107/.minikube/cert.pem, removing ...
	I1119 02:34:08.954277  324696 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11107/.minikube/cert.pem
	I1119 02:34:08.954313  324696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-11107/.minikube/cert.pem (1123 bytes)
	I1119 02:34:08.954408  324696 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11107/.minikube/key.pem, removing ...
	I1119 02:34:08.954419  324696 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11107/.minikube/key.pem
	I1119 02:34:08.954455  324696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-11107/.minikube/key.pem (1675 bytes)
	I1119 02:34:08.954531  324696 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-11107/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-691094 san=[127.0.0.1 192.168.103.2 localhost minikube old-k8s-version-691094]
	I1119 02:34:09.259240  324696 provision.go:177] copyRemoteCerts
	I1119 02:34:09.259314  324696 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 02:34:09.259356  324696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-691094
	I1119 02:34:09.277923  324696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/old-k8s-version-691094/id_rsa Username:docker}
	I1119 02:34:09.374190  324696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 02:34:09.393265  324696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1119 02:34:09.411935  324696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 02:34:09.432717  324696 provision.go:87] duration metric: took 497.361291ms to configureAuth
	I1119 02:34:09.432754  324696 ubuntu.go:206] setting minikube options for container-runtime
	I1119 02:34:09.432966  324696 config.go:182] Loaded profile config "old-k8s-version-691094": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1119 02:34:09.432979  324696 machine.go:97] duration metric: took 3.968076168s to provisionDockerMachine
	I1119 02:34:09.432987  324696 start.go:293] postStartSetup for "old-k8s-version-691094" (driver="docker")
	I1119 02:34:09.432998  324696 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 02:34:09.433047  324696 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 02:34:09.433079  324696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-691094
	I1119 02:34:09.452076  324696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/old-k8s-version-691094/id_rsa Username:docker}
	I1119 02:34:09.549428  324696 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 02:34:09.554004  324696 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 02:34:09.554030  324696 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 02:34:09.554039  324696 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11107/.minikube/addons for local assets ...
	I1119 02:34:09.554091  324696 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11107/.minikube/files for local assets ...
	I1119 02:34:09.554175  324696 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem -> 146572.pem in /etc/ssl/certs
	I1119 02:34:09.554302  324696 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 02:34:09.562493  324696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem --> /etc/ssl/certs/146572.pem (1708 bytes)
	I1119 02:34:09.580517  324696 start.go:296] duration metric: took 147.513177ms for postStartSetup
	I1119 02:34:09.580594  324696 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:34:09.580640  324696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-691094
	I1119 02:34:09.599654  324696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/old-k8s-version-691094/id_rsa Username:docker}
	I1119 02:34:09.692833  324696 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 02:34:09.697541  324696 fix.go:56] duration metric: took 4.576402301s for fixHost
	I1119 02:34:09.697567  324696 start.go:83] releasing machines lock for "old-k8s-version-691094", held for 4.576454368s
	I1119 02:34:09.697662  324696 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-691094
	I1119 02:34:09.716331  324696 ssh_runner.go:195] Run: cat /version.json
	I1119 02:34:09.716402  324696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-691094
	I1119 02:34:09.716409  324696 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 02:34:09.716490  324696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-691094
	I1119 02:34:09.736675  324696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/old-k8s-version-691094/id_rsa Username:docker}
	I1119 02:34:09.738061  324696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/old-k8s-version-691094/id_rsa Username:docker}
	I1119 02:34:09.829692  324696 ssh_runner.go:195] Run: systemctl --version
	I1119 02:34:09.887465  324696 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 02:34:09.892578  324696 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 02:34:09.892646  324696 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 02:34:09.901399  324696 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 02:34:09.901428  324696 start.go:496] detecting cgroup driver to use...
	I1119 02:34:09.901458  324696 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 02:34:09.901495  324696 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1119 02:34:09.919955  324696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1119 02:34:09.934028  324696 docker.go:218] disabling cri-docker service (if available) ...
	I1119 02:34:09.934084  324696 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 02:34:09.949634  324696 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 02:34:09.962553  324696 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 02:34:10.049637  324696 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 02:34:10.129412  324696 docker.go:234] disabling docker service ...
	I1119 02:34:10.129481  324696 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 02:34:10.144428  324696 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 02:34:10.157554  324696 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 02:34:10.239317  324696 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 02:34:10.316126  324696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 02:34:10.329220  324696 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 02:34:10.344358  324696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1119 02:34:10.353528  324696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1119 02:34:10.362893  324696 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1119 02:34:10.362961  324696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1119 02:34:10.372376  324696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 02:34:10.381551  324696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1119 02:34:10.391045  324696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 02:34:10.400284  324696 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 02:34:10.408739  324696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1119 02:34:10.417833  324696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1119 02:34:10.426785  324696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
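The run of sed -i edits above rewrites /etc/containerd/config.toml in place; a cheap way to spot-check the result before restarting containerd (expected values taken from the commands in this log):

	grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	# expect, among the matches:
	#   SystemdCgroup = true
	#   sandbox_image = "registry.k8s.io/pause:3.9"
	#   conf_dir = "/etc/cni/net.d"
	#   enable_unprivileged_ports = true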
	I1119 02:34:10.436234  324696 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 02:34:10.443826  324696 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 02:34:10.451672  324696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:34:10.532639  324696 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1119 02:34:10.644692  324696 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1119 02:34:10.644752  324696 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
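The 60s wait amounts to polling stat until the containerd socket exists; a minimal equivalent loop:

	for _ in $(seq 1 60); do
	  [ -S /run/containerd/containerd.sock ] && break   # -S: path exists and is a socket
	  sleep 1
	done
	[ -S /run/containerd/containerd.sock ] || { echo "containerd.sock still absent after 60s" >&2; exit 1; }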
	I1119 02:34:10.648866  324696 start.go:564] Will wait 60s for crictl version
	I1119 02:34:10.648925  324696 ssh_runner.go:195] Run: which crictl
	I1119 02:34:10.652785  324696 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 02:34:10.678515  324696 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1119 02:34:10.678580  324696 ssh_runner.go:195] Run: containerd --version
	I1119 02:34:10.699600  324696 ssh_runner.go:195] Run: containerd --version
	I1119 02:34:10.721969  324696 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
	I1119 02:34:10.723273  324696 cli_runner.go:164] Run: docker network inspect old-k8s-version-691094 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:34:10.742836  324696 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1119 02:34:10.746987  324696 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
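That one-liner is an idempotent /etc/hosts update: drop any stale host.minikube.internal entry, append the current mapping, and copy the temp file back over /etc/hosts. Spelled out (bash, as in the log):

	{ grep -v $'\thost.minikube.internal$' /etc/hosts    # strip the old entry, if any
	  echo $'192.168.103.1\thost.minikube.internal'      # append the fresh mapping
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts                         # cp, not mv: preserves ownership/mode of /etc/hosts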
	I1119 02:34:10.757193  324696 kubeadm.go:884] updating cluster {Name:old-k8s-version-691094 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-691094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 02:34:10.757295  324696 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1119 02:34:10.757339  324696 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:34:10.783775  324696 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 02:34:10.783796  324696 containerd.go:534] Images already preloaded, skipping extraction
	I1119 02:34:10.783845  324696 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:34:10.809574  324696 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 02:34:10.809594  324696 cache_images.go:86] Images are preloaded, skipping loading
	I1119 02:34:10.809602  324696 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.28.0 containerd true true} ...
	I1119 02:34:10.809703  324696 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-691094 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-691094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
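In the unit above, the empty ExecStart= line is the standard systemd drop-in idiom: it clears the ExecStart inherited from kubelet.service so the next line fully replaces it instead of appending a second command. As a sketch (content condensed from this log, not the verbatim file minikube ships):

	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart kubelet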
	I1119 02:34:10.809764  324696 ssh_runner.go:195] Run: sudo crictl info
	I1119 02:34:10.837391  324696 cni.go:84] Creating CNI manager for ""
	I1119 02:34:10.837414  324696 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 02:34:10.837430  324696 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 02:34:10.837448  324696 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-691094 NodeName:old-k8s-version-691094 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:34:10.837570  324696 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-691094"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
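	Before this generated YAML is handed to kubeadm it can be sanity-checked offline; recent kubeadm releases (v1.26+) ship a validate subcommand, so a plausible check against the file written below would be:

	sudo /var/lib/minikube/binaries/v1.28.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new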
	
	I1119 02:34:10.837624  324696 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1119 02:34:10.846499  324696 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 02:34:10.846567  324696 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:34:10.854749  324696 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1119 02:34:10.867992  324696 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:34:10.880458  324696 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2178 bytes)
	I1119 02:34:10.893442  324696 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:34:10.897416  324696 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:34:10.907534  324696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:34:10.989244  324696 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:34:11.016308  324696 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/old-k8s-version-691094 for IP: 192.168.103.2
	I1119 02:34:11.016329  324696 certs.go:195] generating shared ca certs ...
	I1119 02:34:11.016347  324696 certs.go:227] acquiring lock for ca certs: {Name:mk11d6789b2333e17b3937495b501fbcca15c242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:34:11.016511  324696 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-11107/.minikube/ca.key
	I1119 02:34:11.016589  324696 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-11107/.minikube/proxy-client-ca.key
	I1119 02:34:11.016608  324696 certs.go:257] generating profile certs ...
	I1119 02:34:11.016704  324696 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/old-k8s-version-691094/client.key
	I1119 02:34:11.016754  324696 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/old-k8s-version-691094/apiserver.key.f11a8208
	I1119 02:34:11.016788  324696 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/old-k8s-version-691094/proxy-client.key
	I1119 02:34:11.016891  324696 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/14657.pem (1338 bytes)
	W1119 02:34:11.016918  324696 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-11107/.minikube/certs/14657_empty.pem, impossibly tiny 0 bytes
	I1119 02:34:11.016926  324696 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 02:34:11.016954  324696 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem (1082 bytes)
	I1119 02:34:11.016981  324696 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:34:11.017012  324696 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/key.pem (1675 bytes)
	I1119 02:34:11.017069  324696 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem (1708 bytes)
	I1119 02:34:11.017776  324696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:34:11.037349  324696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 02:34:11.057607  324696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:34:11.076555  324696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:34:11.100260  324696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/old-k8s-version-691094/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1119 02:34:11.123800  324696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/old-k8s-version-691094/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 02:34:11.144085  324696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/old-k8s-version-691094/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:34:11.163622  324696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/old-k8s-version-691094/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 02:34:11.182393  324696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:34:11.200995  324696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/certs/14657.pem --> /usr/share/ca-certificates/14657.pem (1338 bytes)
	I1119 02:34:11.221567  324696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem --> /usr/share/ca-certificates/146572.pem (1708 bytes)
	I1119 02:34:11.239423  324696 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 02:34:11.254012  324696 ssh_runner.go:195] Run: openssl version
	I1119 02:34:11.260448  324696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146572.pem && ln -fs /usr/share/ca-certificates/146572.pem /etc/ssl/certs/146572.pem"
	I1119 02:34:11.269712  324696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146572.pem
	I1119 02:34:11.273689  324696 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:02 /usr/share/ca-certificates/146572.pem
	I1119 02:34:11.273747  324696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146572.pem
	I1119 02:34:11.309318  324696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/146572.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 02:34:11.318183  324696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:34:11.326691  324696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:34:11.330473  324696 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:57 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:34:11.330521  324696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:34:11.366523  324696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 02:34:11.375398  324696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14657.pem && ln -fs /usr/share/ca-certificates/14657.pem /etc/ssl/certs/14657.pem"
	I1119 02:34:11.384554  324696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14657.pem
	I1119 02:34:11.388814  324696 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:02 /usr/share/ca-certificates/14657.pem
	I1119 02:34:11.388877  324696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14657.pem
	I1119 02:34:11.425041  324696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14657.pem /etc/ssl/certs/51391683.0"
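The .0 link names above come from OpenSSL's subject-hash scheme: a CA is located by hashing its subject, so each PEM gets a <hash>.0 symlink in /etc/ssl/certs. The pattern, using the minikubeCA values from this log:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	echo "$h"                                      # b5213941, matching the link created above
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"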
	I1119 02:34:11.435060  324696 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:34:11.439312  324696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 02:34:11.474087  324696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 02:34:11.508880  324696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 02:34:11.543396  324696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 02:34:11.588583  324696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 02:34:11.638230  324696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
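Each -checkend 86400 above exits non-zero when the certificate expires within the next 86400 s (24 h), which makes it usable as a cheap renewal gate:

	crt=/var/lib/minikube/certs/front-proxy-client.crt
	if sudo openssl x509 -noout -in "$crt" -checkend 86400; then
	  echo "ok: valid for at least another 24h"
	else
	  echo "renew: expires within 24h (or already expired)" >&2
	fi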
	I1119 02:34:11.693651  324696 kubeadm.go:401] StartCluster: {Name:old-k8s-version-691094 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-691094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:34:11.693762  324696 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1119 02:34:11.693825  324696 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:34:11.727661  324696 cri.go:89] found id: "c1615f8b7603bea728b5006e32d7828f14c61d090c32205f627734bd31dbfc55"
	I1119 02:34:11.727692  324696 cri.go:89] found id: "64f7107f2fd5d904d3ca02f88e2104c599bec7b13a530829ee0c761e554b6557"
	I1119 02:34:11.727698  324696 cri.go:89] found id: "9cc7faf54132ee2da8525106f2f0da439e1d48a41206667ecc42c4add5564fcc"
	I1119 02:34:11.727702  324696 cri.go:89] found id: "1e139eec825de9114abd6701b9ab42ee2b8ab9b766ece6ead08550a8ad647722"
	I1119 02:34:11.727705  324696 cri.go:89] found id: "e773989cb5b9719f34c18e9670a458f821a72e3b0c1f48c1667978ae16fa12a4"
	I1119 02:34:11.727709  324696 cri.go:89] found id: "dda3cde60adcefe6dc905f202c5021fdb56f1c94c37adce1fdae5c18d6080acc"
	I1119 02:34:11.727713  324696 cri.go:89] found id: "5dde09d6b5534707795709157ee81edeb05e31172278aaf5526347ba15edf149"
	I1119 02:34:11.727716  324696 cri.go:89] found id: "ae40aa345e79cbe278439afee2a5038c48c1ac05f3405d97259e5af73e3fbf92"
	I1119 02:34:11.727723  324696 cri.go:89] found id: "b77b79fa6a466aa3e18c8bd7eba3c607337982e750126d443bc923b253db1773"
	I1119 02:34:11.727736  324696 cri.go:89] found id: "dbc14fc0cc43a9945343d07a4033d270d1157c5a3b861d1386847247f42a1497"
	I1119 02:34:11.727741  324696 cri.go:89] found id: "2710c5af3eee6491ef45de25344cda5fa8a6bddc3604a03908e7ec36cc3ec259"
	I1119 02:34:11.727747  324696 cri.go:89] found id: ""
	I1119 02:34:11.727797  324696 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1119 02:34:11.757594  324696 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"0d529a5e629230c31fb73b3d2ed3cfce798ecc4d524c6ad341c7d6de85135800","pid":815,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d529a5e629230c31fb73b3d2ed3cfce798ecc4d524c6ad341c7d6de85135800","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d529a5e629230c31fb73b3d2ed3cfce798ecc4d524c6ad341c7d6de85135800/rootfs","created":"2025-11-19T02:34:11.619427608Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.9","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"0d529a5e629230c31fb73b3d2ed3cfce798ecc4d524c6ad341c7d6de85135800","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-old-k8s-version-691094_43c6e24af58f4532899857c154187af1","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-old-k8s-version-691094","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"43c6e24af58f4532899857c154187af1"},"owner":"root"},{"ociVersion":"1.2.1","id":"35d29561c1f5d9397b4944b18bf98459d3454b169cc0c4ebd658a27a3500efef","pid":868,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/35d29561c1f5d9397b4944b18bf98459d3454b169cc0c4ebd658a27a3500efef","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/35d29561c1f5d9397b4944b18bf98459d3454b169cc0c4ebd658a27a3500efef/rootfs","created":"2025-11-19T02:34:11.659902181Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.9","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"35d29561c1f5d9397b4944b18bf98459d3454b169cc0c4ebd658a27a3500efef","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-old-k8s-version-691094_84ef4fdb4d0c12a012863d9b76078617","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-old-k8s-version-691094","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"84ef4fdb4d0c12a012863d9b76078617"},"owner":"root"},{"ociVersion":"1.2.1","id":"64f7107f2fd5d904d3ca02f88e2104c599bec7b13a530829ee0c761e554b6557","pid":939,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/64f7107f2fd5d904d3ca02f88e2104c599bec7b13a530829ee0c761e554b6557","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/64f7107f2fd5d904d3ca02f88e2104c599bec7b13a530829ee0c761e554b6557/rootfs","created":"2025-11-19T02:34:11.738650447Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri.sandbox-id":"723d9c424dadf8757aed6fa281d70acfbc64ff1356e77d613f78aee773d061b5","io.kubernetes.cri.sandbox-name":"etcd-old-k8s-version-691094","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4f725102ec78b74a198d6aa56d892f56"},"owner":"root"},{"ociVersion":"1.2.1","id":"723d9c424dadf8757aed6fa281d70acfbc64ff1356e77d613f78aee773d061b5","pid":833,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/723d9c424dadf8757aed6fa281d70acfbc64ff1356e77d613f78aee773d061b5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/723d9c424dadf8757aed6fa281d70acfbc64ff1356e77d613f78aee773d061b5/rootfs","created":"2025-11-19T02:34:11.622200095Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.9","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"723d9c424dadf8757aed6fa281d70acfbc64ff1356e77d613f78aee773d061b5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-old-k8s-version-691094_4f725102ec78b74a198d6aa56d892f56","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-old-k8s-version-691094","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4f725102ec78b74a198d6aa56d892f56"},"owner":"root"},{"ociVersion":"1.2.1","id":"769c8020ea41c57ba98f823e94ac992c01cb7b712588250eb6d55cb0c50b7475","pid":870,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/769c8020ea41c57ba98f823e94ac992c01cb7b712588250eb6d55cb0c50b7475","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/769c8020ea41c57ba98f823e94ac992c01cb7b712588250eb6d55cb0c50b7475/rootfs","created":"2025-11-19T02:34:11.65982413Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.9","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"769c8020ea41c57ba98f823e94ac992c01cb7b712588250eb6d55cb0c50b7475","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-old-k8s-version-691094_4bd64ce31c0d565619382fafb2d03a51","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-old-k8s-version-691094","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4bd64ce31c0d565619382fafb2d03a51"},"owner":"root"},{"ociVersion":"1.2.1","id":"927b1e80c6867d0792be216dabd30704cdb03840f55b022c96a6e4f5b0fe51e5","pid":978,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/927b1e80c6867d0792be216dabd30704cdb03840f55b022c96a6e4f5b0fe51e5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/927b1e80c6867d0792be216dabd30704cdb03840f55b022c96a6e4f5b0fe51e5/rootfs","created":"0001-01-01T00:00:00Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.28.0","io.kubernetes.cri.sandbox-id":"35d29561c1f5d9397b4944b18bf98459d3454b169cc0c4ebd658a27a3500efef","io.kubernetes.cri.sandbox-name":"kube-controller-manager-old-k8s-version-691094","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"84ef4fdb4d0c12a012863d9b76078617"},"owner":"root"},{"ociVersion":"1.2.1","id":"9cc7faf54132ee2da8525106f2f0da439e1d48a41206667ecc42c4add5564fcc","pid":941,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9cc7faf54132ee2da8525106f2f0da439e1d48a41206667ecc42c4add5564fcc","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9cc7faf54132ee2da8525106f2f0da439e1d48a41206667ecc42c4add5564fcc/rootfs","created":"2025-11-19T02:34:11.750591758Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.28.0","io.kubernetes.cri.sandbox-id":"0d529a5e629230c31fb73b3d2ed3cfce798ecc4d524c6ad341c7d6de85135800","io.kubernetes.cri.sandbox-name":"kube-apiserver-old-k8s-version-691094","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"43c6e24af58f4532899857c154187af1"},"owner":"root"},{"ociVersion":"1.2.1","id":"c1615f8b7603bea728b5006e32d7828f14c61d090c32205f627734bd31dbfc55","pid":0,"status":"stopped","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c1615f8b7603bea728b5006e32d7828f14c61d090c32205f627734bd31dbfc55","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c1615f8b7603bea728b5006e32d7828f14c61d090c32205f627734bd31dbfc55/rootfs","created":"0001-01-01T00:00:00Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.28.0","io.kubernetes.cri.sandbox-id":"769c8020ea41c57ba98f823e94ac992c01cb7b712588250eb6d55cb0c50b7475","io.kubernetes.cri.sandbox-name":"kube-scheduler-old-k8s-version-691094","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4bd64ce31c0d565619382fafb2d03a51"},"owner":"root"}]
	I1119 02:34:11.757851  324696 cri.go:126] list returned 8 containers
	I1119 02:34:11.757869  324696 cri.go:129] container: {ID:0d529a5e629230c31fb73b3d2ed3cfce798ecc4d524c6ad341c7d6de85135800 Status:running}
	I1119 02:34:11.757927  324696 cri.go:131] skipping 0d529a5e629230c31fb73b3d2ed3cfce798ecc4d524c6ad341c7d6de85135800 - not in ps
	I1119 02:34:11.757935  324696 cri.go:129] container: {ID:35d29561c1f5d9397b4944b18bf98459d3454b169cc0c4ebd658a27a3500efef Status:running}
	I1119 02:34:11.757941  324696 cri.go:131] skipping 35d29561c1f5d9397b4944b18bf98459d3454b169cc0c4ebd658a27a3500efef - not in ps
	I1119 02:34:11.757945  324696 cri.go:129] container: {ID:64f7107f2fd5d904d3ca02f88e2104c599bec7b13a530829ee0c761e554b6557 Status:created}
	I1119 02:34:11.757955  324696 cri.go:135] skipping {64f7107f2fd5d904d3ca02f88e2104c599bec7b13a530829ee0c761e554b6557 created}: state = "created", want "paused"
	I1119 02:34:11.757966  324696 cri.go:129] container: {ID:723d9c424dadf8757aed6fa281d70acfbc64ff1356e77d613f78aee773d061b5 Status:running}
	I1119 02:34:11.757972  324696 cri.go:131] skipping 723d9c424dadf8757aed6fa281d70acfbc64ff1356e77d613f78aee773d061b5 - not in ps
	I1119 02:34:11.757977  324696 cri.go:129] container: {ID:769c8020ea41c57ba98f823e94ac992c01cb7b712588250eb6d55cb0c50b7475 Status:running}
	I1119 02:34:11.757983  324696 cri.go:131] skipping 769c8020ea41c57ba98f823e94ac992c01cb7b712588250eb6d55cb0c50b7475 - not in ps
	I1119 02:34:11.757987  324696 cri.go:129] container: {ID:927b1e80c6867d0792be216dabd30704cdb03840f55b022c96a6e4f5b0fe51e5 Status:created}
	I1119 02:34:11.757992  324696 cri.go:131] skipping 927b1e80c6867d0792be216dabd30704cdb03840f55b022c96a6e4f5b0fe51e5 - not in ps
	I1119 02:34:11.757997  324696 cri.go:129] container: {ID:9cc7faf54132ee2da8525106f2f0da439e1d48a41206667ecc42c4add5564fcc Status:created}
	I1119 02:34:11.758006  324696 cri.go:135] skipping {9cc7faf54132ee2da8525106f2f0da439e1d48a41206667ecc42c4add5564fcc created}: state = "created", want "paused"
	I1119 02:34:11.758012  324696 cri.go:129] container: {ID:c1615f8b7603bea728b5006e32d7828f14c61d090c32205f627734bd31dbfc55 Status:stopped}
	I1119 02:34:11.758018  324696 cri.go:135] skipping {c1615f8b7603bea728b5006e32d7828f14c61d090c32205f627734bd31dbfc55 stopped}: state = "stopped", want "paused"
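The scan above reduces to: dump runc's container list as JSON, keep only entries whose status matches the wanted state. With jq on hand (an assumption; minikube filters in Go), the same selection is:

	sudo runc --root /run/containerd/runc/k8s.io list -f json \
	  | jq -r '.[] | select(.status == "paused") | .id'
	# everything here is running/created/stopped, so this prints nothing,
	# matching the skipping ... want "paused" lines above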
	I1119 02:34:11.758070  324696 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:34:11.769466  324696 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 02:34:11.769502  324696 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 02:34:11.769563  324696 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 02:34:11.783665  324696 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 02:34:11.785268  324696 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-691094" does not appear in /home/jenkins/minikube-integration/21924-11107/kubeconfig
	I1119 02:34:11.786176  324696 kubeconfig.go:62] /home/jenkins/minikube-integration/21924-11107/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-691094" cluster setting kubeconfig missing "old-k8s-version-691094" context setting]
	I1119 02:34:11.787491  324696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/kubeconfig: {Name:mka1017ae43351e828a5b28d36aa5ae57c3e40e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
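Repairing a kubeconfig that lost its cluster and context stanzas is equivalent to re-minting them with kubectl; a hypothetical manual version (the server URL varies by driver: with docker the apiserver is published on a mapped localhost port, so PORT below is a placeholder):

	kubectl config set-cluster old-k8s-version-691094 \
	  --server=https://127.0.0.1:PORT \
	  --certificate-authority=/home/jenkins/minikube-integration/21924-11107/.minikube/ca.crt
	kubectl config set-context old-k8s-version-691094 \
	  --cluster=old-k8s-version-691094 --user=old-k8s-version-691094
	kubectl config use-context old-k8s-version-691094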
	I1119 02:34:11.789618  324696 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 02:34:11.803032  324696 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1119 02:34:11.803084  324696 kubeadm.go:602] duration metric: took 33.570577ms to restartPrimaryControlPlane
	I1119 02:34:11.803096  324696 kubeadm.go:403] duration metric: took 109.455267ms to StartCluster
	I1119 02:34:11.803117  324696 settings.go:142] acquiring lock: {Name:mka23b77a5b0acf39a3c823ca2c708cd59c6f5ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:34:11.803175  324696 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11107/kubeconfig
	I1119 02:34:11.805728  324696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/kubeconfig: {Name:mka1017ae43351e828a5b28d36aa5ae57c3e40e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:34:11.806207  324696 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 02:34:11.806469  324696 config.go:182] Loaded profile config "old-k8s-version-691094": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1119 02:34:11.806484  324696 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:34:11.806986  324696 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-691094"
	I1119 02:34:11.807004  324696 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-691094"
	W1119 02:34:11.807012  324696 addons.go:248] addon storage-provisioner should already be in state true
	I1119 02:34:11.807040  324696 host.go:66] Checking if "old-k8s-version-691094" exists ...
	I1119 02:34:11.807558  324696 cli_runner.go:164] Run: docker container inspect old-k8s-version-691094 --format={{.State.Status}}
	I1119 02:34:11.807606  324696 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-691094"
	I1119 02:34:11.807632  324696 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-691094"
	I1119 02:34:11.807886  324696 addons.go:70] Setting metrics-server=true in profile "old-k8s-version-691094"
	I1119 02:34:11.807914  324696 addons.go:239] Setting addon metrics-server=true in "old-k8s-version-691094"
	W1119 02:34:11.807923  324696 addons.go:248] addon metrics-server should already be in state true
	I1119 02:34:11.807946  324696 cli_runner.go:164] Run: docker container inspect old-k8s-version-691094 --format={{.State.Status}}
	I1119 02:34:11.807975  324696 host.go:66] Checking if "old-k8s-version-691094" exists ...
	I1119 02:34:11.808120  324696 addons.go:70] Setting dashboard=true in profile "old-k8s-version-691094"
	I1119 02:34:11.808138  324696 addons.go:239] Setting addon dashboard=true in "old-k8s-version-691094"
	W1119 02:34:11.808146  324696 addons.go:248] addon dashboard should already be in state true
	I1119 02:34:11.808173  324696 host.go:66] Checking if "old-k8s-version-691094" exists ...
	I1119 02:34:11.808693  324696 cli_runner.go:164] Run: docker container inspect old-k8s-version-691094 --format={{.State.Status}}
	I1119 02:34:11.808736  324696 cli_runner.go:164] Run: docker container inspect old-k8s-version-691094 --format={{.State.Status}}
	I1119 02:34:11.812479  324696 out.go:179] * Verifying Kubernetes components...
	I1119 02:34:11.814892  324696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:34:11.844361  324696 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1119 02:34:11.844722  324696 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:34:11.845944  324696 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1119 02:34:11.845969  324696 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1119 02:34:11.846023  324696 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:34:11.846050  324696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-691094
	I1119 02:34:11.846051  324696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:34:11.846155  324696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-691094
	I1119 02:34:11.852414  324696 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-691094"
	W1119 02:34:11.852444  324696 addons.go:248] addon default-storageclass should already be in state true
	I1119 02:34:11.852476  324696 host.go:66] Checking if "old-k8s-version-691094" exists ...
	I1119 02:34:11.852959  324696 cli_runner.go:164] Run: docker container inspect old-k8s-version-691094 --format={{.State.Status}}
	I1119 02:34:11.867693  324696 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1119 02:34:11.869830  324696 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	ab00038d157e9       56cc512116c8f       10 seconds ago      Running             busybox                   0                   59ab2bcb83dd0       busybox                                      default
	efc94048807eb       52546a367cc9e       15 seconds ago      Running             coredns                   0                   4de395773869c       coredns-66bc5c9577-zjkgg                     kube-system
	07248d3fa7700       6e38f40d628db       15 seconds ago      Running             storage-provisioner       0                   d6f91d217e6c3       storage-provisioner                          kube-system
	627aeabcbd8b9       409467f978b4a       27 seconds ago      Running             kindnet-cni               0                   4d28a48ddf233       kindnet-rf6v9                                kube-system
	6a6b45ccc3386       fc25172553d79       27 seconds ago      Running             kube-proxy                0                   7f4f8e38d760a       kube-proxy-v65n7                             kube-system
	c41f3c5163fef       5f1f5298c888d       37 seconds ago      Running             etcd                      0                   f1bc2fcd4c787       etcd-embed-certs-168452                      kube-system
	429108bffb6c2       c3994bc696102       37 seconds ago      Running             kube-apiserver            0                   855b5a64cf042       kube-apiserver-embed-certs-168452            kube-system
	45e16dd5855dc       c80c8dbafe7dd       37 seconds ago      Running             kube-controller-manager   0                   fc384d699a6fe       kube-controller-manager-embed-certs-168452   kube-system
	16ff9f0719734       7dd6aaa1717ab       37 seconds ago      Running             kube-scheduler            0                   ce3809f0d4fbd       kube-scheduler-embed-certs-168452            kube-system
	
	
	==> containerd <==
	Nov 19 02:33:59 embed-certs-168452 containerd[663]: time="2025-11-19T02:33:59.270548100Z" level=info msg="Container efc94048807ebbcf329446ac8949c16a7392055b208364f9e03d0eae632bb378: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 02:33:59 embed-certs-168452 containerd[663]: time="2025-11-19T02:33:59.270833598Z" level=info msg="CreateContainer within sandbox \"d6f91d217e6c3552f3a69fa0623e507a5e67784dd771bed97dd1d06401b4bfd3\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"07248d3fa7700df350c76db3b67044ace82560729e01191bc23c841731cd3cfa\""
	Nov 19 02:33:59 embed-certs-168452 containerd[663]: time="2025-11-19T02:33:59.271491612Z" level=info msg="StartContainer for \"07248d3fa7700df350c76db3b67044ace82560729e01191bc23c841731cd3cfa\""
	Nov 19 02:33:59 embed-certs-168452 containerd[663]: time="2025-11-19T02:33:59.272595505Z" level=info msg="connecting to shim 07248d3fa7700df350c76db3b67044ace82560729e01191bc23c841731cd3cfa" address="unix:///run/containerd/s/41d78d767206b328aff4597f7c602e19da062a2cdbdc7b7e3a7ceb19b6896fef" protocol=ttrpc version=3
	Nov 19 02:33:59 embed-certs-168452 containerd[663]: time="2025-11-19T02:33:59.276910603Z" level=info msg="CreateContainer within sandbox \"4de395773869cc7dfa7289a9a9f472e975bfb3e3d6314b1e46a67f92b2934540\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"efc94048807ebbcf329446ac8949c16a7392055b208364f9e03d0eae632bb378\""
	Nov 19 02:33:59 embed-certs-168452 containerd[663]: time="2025-11-19T02:33:59.277542724Z" level=info msg="StartContainer for \"efc94048807ebbcf329446ac8949c16a7392055b208364f9e03d0eae632bb378\""
	Nov 19 02:33:59 embed-certs-168452 containerd[663]: time="2025-11-19T02:33:59.278513534Z" level=info msg="connecting to shim efc94048807ebbcf329446ac8949c16a7392055b208364f9e03d0eae632bb378" address="unix:///run/containerd/s/44ebb2baff15a7c6e6afe805c3f620ba2ac710b53a58cf2d4b044655f2f3e3b4" protocol=ttrpc version=3
	Nov 19 02:33:59 embed-certs-168452 containerd[663]: time="2025-11-19T02:33:59.337834277Z" level=info msg="StartContainer for \"07248d3fa7700df350c76db3b67044ace82560729e01191bc23c841731cd3cfa\" returns successfully"
	Nov 19 02:33:59 embed-certs-168452 containerd[663]: time="2025-11-19T02:33:59.345187757Z" level=info msg="StartContainer for \"efc94048807ebbcf329446ac8949c16a7392055b208364f9e03d0eae632bb378\" returns successfully"
	Nov 19 02:34:02 embed-certs-168452 containerd[663]: time="2025-11-19T02:34:02.161091209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:21d4a418-fd63-4ac5-922d-cb793556218b,Namespace:default,Attempt:0,}"
	Nov 19 02:34:02 embed-certs-168452 containerd[663]: time="2025-11-19T02:34:02.196961969Z" level=info msg="connecting to shim 59ab2bcb83dd06387c432c2650e201a9cbd86173d58f8d0db1922ad1ddd3dfdd" address="unix:///run/containerd/s/8503a785494125141260421ffc0ed807f49afa13268713db5864942eddb5e97a" namespace=k8s.io protocol=ttrpc version=3
	Nov 19 02:34:02 embed-certs-168452 containerd[663]: time="2025-11-19T02:34:02.281117752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:21d4a418-fd63-4ac5-922d-cb793556218b,Namespace:default,Attempt:0,} returns sandbox id \"59ab2bcb83dd06387c432c2650e201a9cbd86173d58f8d0db1922ad1ddd3dfdd\""
	Nov 19 02:34:02 embed-certs-168452 containerd[663]: time="2025-11-19T02:34:02.283495058Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 02:34:04 embed-certs-168452 containerd[663]: time="2025-11-19T02:34:04.383809866Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 02:34:04 embed-certs-168452 containerd[663]: time="2025-11-19T02:34:04.384560370Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396646"
	Nov 19 02:34:04 embed-certs-168452 containerd[663]: time="2025-11-19T02:34:04.385692546Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 02:34:04 embed-certs-168452 containerd[663]: time="2025-11-19T02:34:04.387641417Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 02:34:04 embed-certs-168452 containerd[663]: time="2025-11-19T02:34:04.387999891Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.104463265s"
	Nov 19 02:34:04 embed-certs-168452 containerd[663]: time="2025-11-19T02:34:04.388046276Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 19 02:34:04 embed-certs-168452 containerd[663]: time="2025-11-19T02:34:04.392730733Z" level=info msg="CreateContainer within sandbox \"59ab2bcb83dd06387c432c2650e201a9cbd86173d58f8d0db1922ad1ddd3dfdd\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 19 02:34:04 embed-certs-168452 containerd[663]: time="2025-11-19T02:34:04.400882222Z" level=info msg="Container ab00038d157e9a07c8bb58bb1fb42e71f81dccc9c10114391264bdc14a97ae8e: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 02:34:04 embed-certs-168452 containerd[663]: time="2025-11-19T02:34:04.406789913Z" level=info msg="CreateContainer within sandbox \"59ab2bcb83dd06387c432c2650e201a9cbd86173d58f8d0db1922ad1ddd3dfdd\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"ab00038d157e9a07c8bb58bb1fb42e71f81dccc9c10114391264bdc14a97ae8e\""
	Nov 19 02:34:04 embed-certs-168452 containerd[663]: time="2025-11-19T02:34:04.407430115Z" level=info msg="StartContainer for \"ab00038d157e9a07c8bb58bb1fb42e71f81dccc9c10114391264bdc14a97ae8e\""
	Nov 19 02:34:04 embed-certs-168452 containerd[663]: time="2025-11-19T02:34:04.408276224Z" level=info msg="connecting to shim ab00038d157e9a07c8bb58bb1fb42e71f81dccc9c10114391264bdc14a97ae8e" address="unix:///run/containerd/s/8503a785494125141260421ffc0ed807f49afa13268713db5864942eddb5e97a" protocol=ttrpc version=3
	Nov 19 02:34:04 embed-certs-168452 containerd[663]: time="2025-11-19T02:34:04.468576361Z" level=info msg="StartContainer for \"ab00038d157e9a07c8bb58bb1fb42e71f81dccc9c10114391264bdc14a97ae8e\" returns successfully"
	
	
	==> coredns [efc94048807ebbcf329446ac8949c16a7392055b208364f9e03d0eae632bb378] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43809 - 5129 "HINFO IN 5397253924571860853.286299948829042260. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.067993989s
	
	
	==> describe nodes <==
	Name:               embed-certs-168452
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-168452
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=embed-certs-168452
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T02_33_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 02:33:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-168452
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 02:34:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 02:34:12 +0000   Wed, 19 Nov 2025 02:33:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 02:34:12 +0000   Wed, 19 Nov 2025 02:33:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 02:34:12 +0000   Wed, 19 Nov 2025 02:33:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 02:34:12 +0000   Wed, 19 Nov 2025 02:33:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-168452
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                ba96682e-cbc3-44e5-a3b6-1fb8a6a2ab97
	  Boot ID:                    fea1659d-b751-4f87-a281-819adf52de2d
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  kube-system                 coredns-66bc5c9577-zjkgg                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-embed-certs-168452                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         34s
	  kube-system                 kindnet-rf6v9                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-embed-certs-168452             250m (3%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-embed-certs-168452    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-v65n7                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-embed-certs-168452             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27s   kube-proxy       
	  Normal  Starting                 34s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  34s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  34s   kubelet          Node embed-certs-168452 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s   kubelet          Node embed-certs-168452 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s   kubelet          Node embed-certs-168452 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s   node-controller  Node embed-certs-168452 event: Registered Node embed-certs-168452 in Controller
	  Normal  NodeReady                17s   kubelet          Node embed-certs-168452 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 cb 73 dc 0d 8a 08 06
	[Nov19 02:31] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 74 0c d7 a6 53 08 06
	[  +0.000339] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 cb 73 dc 0d 8a 08 06
	[ +28.680399] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 e9 7c 92 36 13 08 06
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff fa 7c 46 16 38 40 08 06
	[Nov19 02:32] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1a f6 63 65 24 bc 08 06
	[  +4.552839] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 88 80 5a bc 94 08 06
	[ +11.086189] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 76 d1 26 7f 3d 08 06
	[  +0.000377] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa 7c 46 16 38 40 08 06
	[  +9.270754] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a2 49 fd 34 51 3b 08 06
	[  +0.000702] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 88 80 5a bc 94 08 06
	[ +23.593864] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca 86 43 5f 18 4c 08 06
	[  +0.000495] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a f6 63 65 24 bc 08 06
	
	
	==> etcd [c41f3c5163fefcf60222c4a4a67440987198fb6dfef8b03ce4bb796c1ab758eb] <==
	{"level":"warn","ts":"2025-11-19T02:33:38.765710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.774487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.782592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.790685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.798745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.807234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.815776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.823438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.830931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.839536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.848592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.856527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.862915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.871123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.880013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.886996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.893756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.902133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.909750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.924567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.935010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.956170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.963848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:38.970968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:33:39.018322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59268","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 02:34:15 up  1:16,  0 user,  load average: 5.67, 4.02, 2.64
	Linux embed-certs-168452 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [627aeabcbd8b9f66bf990c7bdfb13e60420783283ad0163466ec4d8e1e0bd079] <==
	I1119 02:33:48.548662       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 02:33:48.548906       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1119 02:33:48.549062       1 main.go:148] setting mtu 1500 for CNI 
	I1119 02:33:48.549080       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 02:33:48.549093       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T02:33:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 02:33:48.752772       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 02:33:48.752805       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 02:33:48.752827       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 02:33:48.846283       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 02:33:49.221523       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 02:33:49.221557       1 metrics.go:72] Registering metrics
	I1119 02:33:49.221651       1 controller.go:711] "Syncing nftables rules"
	I1119 02:33:58.757479       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1119 02:33:58.757535       1 main.go:301] handling current node
	I1119 02:34:08.754725       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1119 02:34:08.754773       1 main.go:301] handling current node
	
	
	==> kube-apiserver [429108bffb6c2a8fe9a429fb55f970610b4ab0d090e445d686e4bdb4bb295962] <==
	E1119 02:33:39.600810       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1119 02:33:39.647945       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 02:33:39.653715       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:33:39.654591       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1119 02:33:39.660768       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:33:39.661775       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 02:33:39.754573       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 02:33:40.452240       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 02:33:40.456529       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 02:33:40.456548       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 02:33:40.979152       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 02:33:41.024712       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 02:33:41.154801       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 02:33:41.161403       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1119 02:33:41.162608       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 02:33:41.167235       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 02:33:41.483687       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 02:33:41.922051       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 02:33:41.932654       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 02:33:41.940904       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 02:33:47.284898       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1119 02:33:47.436098       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 02:33:47.587889       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:33:47.592312       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1119 02:34:12.032965       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:41640: use of closed network connection
	
	
	==> kube-controller-manager [45e16dd5855dc98d25ef7af6cc8b57610efe471ab11301d4a7c5def7b1ccc943] <==
	I1119 02:33:46.469408       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 02:33:46.469433       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 02:33:46.469443       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 02:33:46.481172       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1119 02:33:46.482217       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1119 02:33:46.482234       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1119 02:33:46.482260       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1119 02:33:46.482624       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1119 02:33:46.482679       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1119 02:33:46.482689       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1119 02:33:46.482768       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1119 02:33:46.483439       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 02:33:46.483449       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 02:33:46.483493       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 02:33:46.483496       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1119 02:33:46.483539       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1119 02:33:46.483483       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1119 02:33:46.483609       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1119 02:33:46.484945       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 02:33:46.487248       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1119 02:33:46.489503       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 02:33:46.489512       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 02:33:46.499705       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1119 02:33:46.499758       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 02:34:01.415907       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [6a6b45ccc3386f2598d874f487c473de83d06079544c8fed7813366dd5764001] <==
	I1119 02:33:47.918676       1 server_linux.go:53] "Using iptables proxy"
	I1119 02:33:47.990453       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 02:33:48.090983       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 02:33:48.091028       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1119 02:33:48.091137       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 02:33:48.123636       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 02:33:48.123873       1 server_linux.go:132] "Using iptables Proxier"
	I1119 02:33:48.132056       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 02:33:48.132667       1 server.go:527] "Version info" version="v1.34.1"
	I1119 02:33:48.132698       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:33:48.137997       1 config.go:200] "Starting service config controller"
	I1119 02:33:48.138706       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 02:33:48.140977       1 config.go:106] "Starting endpoint slice config controller"
	I1119 02:33:48.141009       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 02:33:48.141022       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 02:33:48.141079       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 02:33:48.141982       1 config.go:309] "Starting node config controller"
	I1119 02:33:48.142786       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 02:33:48.142818       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 02:33:48.243583       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 02:33:48.243635       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 02:33:48.243639       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [16ff9f07197344243d6fe00befa11b307459273134b9a9d4fdf6f89d375e78a5] <==
	I1119 02:33:39.850444       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:33:39.850770       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 02:33:39.850826       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1119 02:33:39.852347       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1119 02:33:39.853543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 02:33:39.854344       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 02:33:39.854402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 02:33:39.854732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 02:33:39.854771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 02:33:39.854801       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 02:33:39.855055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 02:33:39.856252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 02:33:39.856298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 02:33:39.856362       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 02:33:39.856425       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 02:33:39.856487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 02:33:39.856571       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 02:33:39.856670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 02:33:39.856697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 02:33:39.856705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 02:33:39.856795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 02:33:39.856914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 02:33:40.739427       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 02:33:40.794085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1119 02:33:41.451337       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 02:33:42 embed-certs-168452 kubelet[1466]: E1119 02:33:42.789229    1466 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-embed-certs-168452\" already exists" pod="kube-system/etcd-embed-certs-168452"
	Nov 19 02:33:42 embed-certs-168452 kubelet[1466]: I1119 02:33:42.800678    1466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-168452" podStartSLOduration=1.8006546079999999 podStartE2EDuration="1.800654608s" podCreationTimestamp="2025-11-19 02:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:33:42.800558386 +0000 UTC m=+1.128773561" watchObservedRunningTime="2025-11-19 02:33:42.800654608 +0000 UTC m=+1.128869741"
	Nov 19 02:33:42 embed-certs-168452 kubelet[1466]: I1119 02:33:42.809536    1466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-168452" podStartSLOduration=1.8095134750000001 podStartE2EDuration="1.809513475s" podCreationTimestamp="2025-11-19 02:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:33:42.809274996 +0000 UTC m=+1.137490149" watchObservedRunningTime="2025-11-19 02:33:42.809513475 +0000 UTC m=+1.137728613"
	Nov 19 02:33:42 embed-certs-168452 kubelet[1466]: I1119 02:33:42.818091    1466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-168452" podStartSLOduration=1.818068776 podStartE2EDuration="1.818068776s" podCreationTimestamp="2025-11-19 02:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:33:42.817975386 +0000 UTC m=+1.146190539" watchObservedRunningTime="2025-11-19 02:33:42.818068776 +0000 UTC m=+1.146283929"
	Nov 19 02:33:42 embed-certs-168452 kubelet[1466]: I1119 02:33:42.827716    1466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-168452" podStartSLOduration=1.8276958730000001 podStartE2EDuration="1.827695873s" podCreationTimestamp="2025-11-19 02:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:33:42.827681062 +0000 UTC m=+1.155896217" watchObservedRunningTime="2025-11-19 02:33:42.827695873 +0000 UTC m=+1.155911027"
	Nov 19 02:33:46 embed-certs-168452 kubelet[1466]: I1119 02:33:46.508359    1466 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 19 02:33:46 embed-certs-168452 kubelet[1466]: I1119 02:33:46.509179    1466 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 19 02:33:47 embed-certs-168452 kubelet[1466]: I1119 02:33:47.381223    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e29d839-0594-41f7-bfd8-1f9ab66b4c86-xtables-lock\") pod \"kindnet-rf6v9\" (UID: \"6e29d839-0594-41f7-bfd8-1f9ab66b4c86\") " pod="kube-system/kindnet-rf6v9"
	Nov 19 02:33:47 embed-certs-168452 kubelet[1466]: I1119 02:33:47.381397    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e29d839-0594-41f7-bfd8-1f9ab66b4c86-lib-modules\") pod \"kindnet-rf6v9\" (UID: \"6e29d839-0594-41f7-bfd8-1f9ab66b4c86\") " pod="kube-system/kindnet-rf6v9"
	Nov 19 02:33:47 embed-certs-168452 kubelet[1466]: I1119 02:33:47.381443    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/edc341f0-decd-4b30-a13d-a730cb8fc47d-xtables-lock\") pod \"kube-proxy-v65n7\" (UID: \"edc341f0-decd-4b30-a13d-a730cb8fc47d\") " pod="kube-system/kube-proxy-v65n7"
	Nov 19 02:33:47 embed-certs-168452 kubelet[1466]: I1119 02:33:47.381468    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/edc341f0-decd-4b30-a13d-a730cb8fc47d-lib-modules\") pod \"kube-proxy-v65n7\" (UID: \"edc341f0-decd-4b30-a13d-a730cb8fc47d\") " pod="kube-system/kube-proxy-v65n7"
	Nov 19 02:33:47 embed-certs-168452 kubelet[1466]: I1119 02:33:47.381487    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6e29d839-0594-41f7-bfd8-1f9ab66b4c86-cni-cfg\") pod \"kindnet-rf6v9\" (UID: \"6e29d839-0594-41f7-bfd8-1f9ab66b4c86\") " pod="kube-system/kindnet-rf6v9"
	Nov 19 02:33:47 embed-certs-168452 kubelet[1466]: I1119 02:33:47.381525    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/edc341f0-decd-4b30-a13d-a730cb8fc47d-kube-proxy\") pod \"kube-proxy-v65n7\" (UID: \"edc341f0-decd-4b30-a13d-a730cb8fc47d\") " pod="kube-system/kube-proxy-v65n7"
	Nov 19 02:33:47 embed-certs-168452 kubelet[1466]: I1119 02:33:47.381550    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tzcs\" (UniqueName: \"kubernetes.io/projected/6e29d839-0594-41f7-bfd8-1f9ab66b4c86-kube-api-access-5tzcs\") pod \"kindnet-rf6v9\" (UID: \"6e29d839-0594-41f7-bfd8-1f9ab66b4c86\") " pod="kube-system/kindnet-rf6v9"
	Nov 19 02:33:47 embed-certs-168452 kubelet[1466]: I1119 02:33:47.381622    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9762k\" (UniqueName: \"kubernetes.io/projected/edc341f0-decd-4b30-a13d-a730cb8fc47d-kube-api-access-9762k\") pod \"kube-proxy-v65n7\" (UID: \"edc341f0-decd-4b30-a13d-a730cb8fc47d\") " pod="kube-system/kube-proxy-v65n7"
	Nov 19 02:33:48 embed-certs-168452 kubelet[1466]: I1119 02:33:48.807341    1466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-v65n7" podStartSLOduration=1.807314915 podStartE2EDuration="1.807314915s" podCreationTimestamp="2025-11-19 02:33:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:33:48.806589928 +0000 UTC m=+7.134805081" watchObservedRunningTime="2025-11-19 02:33:48.807314915 +0000 UTC m=+7.135530067"
	Nov 19 02:33:48 embed-certs-168452 kubelet[1466]: I1119 02:33:48.821123    1466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-rf6v9" podStartSLOduration=1.821100286 podStartE2EDuration="1.821100286s" podCreationTimestamp="2025-11-19 02:33:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:33:48.820893573 +0000 UTC m=+7.149108727" watchObservedRunningTime="2025-11-19 02:33:48.821100286 +0000 UTC m=+7.149315451"
	Nov 19 02:33:58 embed-certs-168452 kubelet[1466]: I1119 02:33:58.798002    1466 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 19 02:33:58 embed-certs-168452 kubelet[1466]: I1119 02:33:58.870226    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c9ef71f-e4c3-4b8d-9ad3-71e0a56231e3-config-volume\") pod \"coredns-66bc5c9577-zjkgg\" (UID: \"5c9ef71f-e4c3-4b8d-9ad3-71e0a56231e3\") " pod="kube-system/coredns-66bc5c9577-zjkgg"
	Nov 19 02:33:58 embed-certs-168452 kubelet[1466]: I1119 02:33:58.870304    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thllq\" (UniqueName: \"kubernetes.io/projected/5c9ef71f-e4c3-4b8d-9ad3-71e0a56231e3-kube-api-access-thllq\") pod \"coredns-66bc5c9577-zjkgg\" (UID: \"5c9ef71f-e4c3-4b8d-9ad3-71e0a56231e3\") " pod="kube-system/coredns-66bc5c9577-zjkgg"
	Nov 19 02:33:58 embed-certs-168452 kubelet[1466]: I1119 02:33:58.870338    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/eebce997-029a-4da2-b6cd-bb0ff195ebbe-tmp\") pod \"storage-provisioner\" (UID: \"eebce997-029a-4da2-b6cd-bb0ff195ebbe\") " pod="kube-system/storage-provisioner"
	Nov 19 02:33:58 embed-certs-168452 kubelet[1466]: I1119 02:33:58.870731    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dslfc\" (UniqueName: \"kubernetes.io/projected/eebce997-029a-4da2-b6cd-bb0ff195ebbe-kube-api-access-dslfc\") pod \"storage-provisioner\" (UID: \"eebce997-029a-4da2-b6cd-bb0ff195ebbe\") " pod="kube-system/storage-provisioner"
	Nov 19 02:33:59 embed-certs-168452 kubelet[1466]: I1119 02:33:59.831649    1466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zjkgg" podStartSLOduration=12.831625567 podStartE2EDuration="12.831625567s" podCreationTimestamp="2025-11-19 02:33:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:33:59.831400228 +0000 UTC m=+18.159615402" watchObservedRunningTime="2025-11-19 02:33:59.831625567 +0000 UTC m=+18.159840720"
	Nov 19 02:33:59 embed-certs-168452 kubelet[1466]: I1119 02:33:59.853960    1466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.853935943 podStartE2EDuration="11.853935943s" podCreationTimestamp="2025-11-19 02:33:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:33:59.853561665 +0000 UTC m=+18.181776818" watchObservedRunningTime="2025-11-19 02:33:59.853935943 +0000 UTC m=+18.182151096"
	Nov 19 02:34:01 embed-certs-168452 kubelet[1466]: I1119 02:34:01.889306    1466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krtnz\" (UniqueName: \"kubernetes.io/projected/21d4a418-fd63-4ac5-922d-cb793556218b-kube-api-access-krtnz\") pod \"busybox\" (UID: \"21d4a418-fd63-4ac5-922d-cb793556218b\") " pod="default/busybox"
	
	
	==> storage-provisioner [07248d3fa7700df350c76db3b67044ace82560729e01191bc23c841731cd3cfa] <==
	I1119 02:33:59.366484       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 02:33:59.370339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:33:59.376043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 02:33:59.376221       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 02:33:59.376292       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4f3365e9-4f71-41d4-a675-26dba5ec0200", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-168452_382dea49-21ad-4002-8312-0e31e936f03e became leader
	I1119 02:33:59.376507       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-168452_382dea49-21ad-4002-8312-0e31e936f03e!
	W1119 02:33:59.379410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:33:59.384523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 02:33:59.476976       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-168452_382dea49-21ad-4002-8312-0e31e936f03e!
	W1119 02:34:01.388541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:34:01.394432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:34:03.399022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:34:03.403255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:34:05.407192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:34:05.411112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:34:07.414502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:34:07.418234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:34:09.421803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:34:09.428291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:34:11.431520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:34:11.435960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:34:13.439701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:34:13.444437       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:34:15.448322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:34:15.453488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-168452 -n embed-certs-168452
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-168452 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (14.58s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (13.86s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-543625 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2c7a1e56-5397-4855-a23a-6fee9e7c0a32] Pending
helpers_test.go:352: "busybox" [2c7a1e56-5397-4855-a23a-6fee9e7c0a32] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1119 02:35:54.687572   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/calico-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:35:54.694970   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kindnet-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [2c7a1e56-5397-4855-a23a-6fee9e7c0a32] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.00372157s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-543625 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
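The line above is the actual cause of this failure: the busybox pod's soft open-file limit (ulimit -n) came back as 1024 where the test asserts 1048576. A minimal sketch for reproducing the probe by hand, reusing the exec command and context name already shown in this log (the /proc/1/limits cross-check against the node container is an assumption for illustration, not part of the test):

    # re-run the failing probe inside the pod
    kubectl --context default-k8s-diff-port-543625 exec busybox -- /bin/sh -c "ulimit -n"

    # assumed cross-check: NOFILE limits of PID 1 inside the minikube node container
    docker exec default-k8s-diff-port-543625 sh -c "grep 'open files' /proc/1/limits"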
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-543625
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-543625:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fd06141a46e5e66ab1beae58a100efdf29e826e0d2730f65161dc84d760a8ffd",
	        "Created": "2025-11-19T02:35:14.844697606Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 338070,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T02:35:14.879624025Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/fd06141a46e5e66ab1beae58a100efdf29e826e0d2730f65161dc84d760a8ffd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fd06141a46e5e66ab1beae58a100efdf29e826e0d2730f65161dc84d760a8ffd/hostname",
	        "HostsPath": "/var/lib/docker/containers/fd06141a46e5e66ab1beae58a100efdf29e826e0d2730f65161dc84d760a8ffd/hosts",
	        "LogPath": "/var/lib/docker/containers/fd06141a46e5e66ab1beae58a100efdf29e826e0d2730f65161dc84d760a8ffd/fd06141a46e5e66ab1beae58a100efdf29e826e0d2730f65161dc84d760a8ffd-json.log",
	        "Name": "/default-k8s-diff-port-543625",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-543625:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-543625",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fd06141a46e5e66ab1beae58a100efdf29e826e0d2730f65161dc84d760a8ffd",
	                "LowerDir": "/var/lib/docker/overlay2/d8992b7ca8b4c201f666f935519d4aa8313a6165602d1554fa18ba5ed871d2b9-init/diff:/var/lib/docker/overlay2/de7938e6a920c133c8c6b988444cfbf6706fdc6982445229ca70e2488a725edb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d8992b7ca8b4c201f666f935519d4aa8313a6165602d1554fa18ba5ed871d2b9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d8992b7ca8b4c201f666f935519d4aa8313a6165602d1554fa18ba5ed871d2b9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d8992b7ca8b4c201f666f935519d4aa8313a6165602d1554fa18ba5ed871d2b9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-543625",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-543625/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-543625",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-543625",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-543625",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "fc44185c184b24d2cd8804a0e72b24c36ea5710b5b04bb7958b94cdfd71e04c7",
	            "SandboxKey": "/var/run/docker/netns/fc44185c184b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-543625": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4119a42ae4484eab7e0723b03a220f641087de3f84e75c47464b02779c15ff0f",
	                    "EndpointID": "339755520b9118d219dbffcae52ea998401518b0b7ff319fe9bbc3f084e309e5",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "ce:15:97:7f:d5:67",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-543625",
	                        "fd06141a46e5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
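One detail worth noting in the inspect output above: HostConfig.Ulimits is empty ("Ulimits": []), so no explicit NOFILE override is set on the node container and it inherits the Docker daemon's defaults. That is one plausible, unconfirmed source of the 1024 limit the test observed. A quick way to query just that field:

    docker inspect --format '{{json .HostConfig.Ulimits}}' default-k8s-diff-port-543625
    # prints [] for this container, i.e. no per-container ulimit overrides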
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-543625 -n default-k8s-diff-port-543625
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-543625 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-543625 logs -n 25: (1.026347707s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ -p old-k8s-version-691094 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-691094       │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ delete  │ -p old-k8s-version-691094                                                                                                                                                                                                                           │ old-k8s-version-691094       │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ delete  │ -p old-k8s-version-691094                                                                                                                                                                                                                           │ old-k8s-version-691094       │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ delete  │ -p disable-driver-mounts-433931                                                                                                                                                                                                                     │ disable-driver-mounts-433931 │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ start   │ -p default-k8s-diff-port-543625 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-543625 │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ image   │ no-preload-483142 image list --format=json                                                                                                                                                                                                          │ no-preload-483142            │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ pause   │ -p no-preload-483142 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-483142            │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ delete  │ -p kubernetes-upgrade-896338                                                                                                                                                                                                                        │ kubernetes-upgrade-896338    │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ unpause │ -p no-preload-483142 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-483142            │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ delete  │ -p no-preload-483142                                                                                                                                                                                                                                │ no-preload-483142            │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ start   │ -p newest-cni-239505 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-239505            │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ delete  │ -p no-preload-483142                                                                                                                                                                                                                                │ no-preload-483142            │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ image   │ embed-certs-168452 image list --format=json                                                                                                                                                                                                         │ embed-certs-168452           │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ pause   │ -p embed-certs-168452 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-168452           │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ unpause │ -p embed-certs-168452 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-168452           │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ delete  │ -p embed-certs-168452                                                                                                                                                                                                                               │ embed-certs-168452           │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ delete  │ -p embed-certs-168452                                                                                                                                                                                                                               │ embed-certs-168452           │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ addons  │ enable metrics-server -p newest-cni-239505 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-239505            │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ stop    │ -p newest-cni-239505 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-239505            │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ addons  │ enable dashboard -p newest-cni-239505 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-239505            │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ start   │ -p newest-cni-239505 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-239505            │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ image   │ newest-cni-239505 image list --format=json                                                                                                                                                                                                          │ newest-cni-239505            │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ pause   │ -p newest-cni-239505 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-239505            │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ unpause │ -p newest-cni-239505 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-239505            │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ delete  │ -p newest-cni-239505                                                                                                                                                                                                                                │ newest-cni-239505            │ jenkins │ v1.37.0 │ 19 Nov 25 02:36 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 02:35:47
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
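The four header lines above give the klog format used throughout this dump: severity letter (I/W/E/F), date, timestamp, thread id, and file:line. A small sketch for isolating just the warning lines from a saved copy of these logs (the minikube.log file name is hypothetical):
	out/minikube-linux-amd64 -p default-k8s-diff-port-543625 logs > minikube.log
	grep -E '^[[:space:]]*W[0-9]{4}' minikube.log
	# matches lines such as: W1119 02:35:44.919531 ... node_ready.go:57] ...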
	I1119 02:35:47.400837  347381 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:35:47.400970  347381 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:35:47.400984  347381 out.go:374] Setting ErrFile to fd 2...
	I1119 02:35:47.400989  347381 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:35:47.401216  347381 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11107/.minikube/bin
	I1119 02:35:47.401739  347381 out.go:368] Setting JSON to false
	I1119 02:35:47.402804  347381 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4687,"bootTime":1763515060,"procs":246,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 02:35:47.402900  347381 start.go:143] virtualization: kvm guest
	I1119 02:35:47.404818  347381 out.go:179] * [newest-cni-239505] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 02:35:47.405933  347381 notify.go:221] Checking for updates...
	I1119 02:35:47.405966  347381 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:35:47.407382  347381 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:35:47.408865  347381 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-11107/kubeconfig
	I1119 02:35:47.410079  347381 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11107/.minikube
	I1119 02:35:47.411263  347381 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 02:35:47.412491  347381 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:35:47.414125  347381 config.go:182] Loaded profile config "newest-cni-239505": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:35:47.414597  347381 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:35:47.439704  347381 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 02:35:47.439787  347381 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:35:47.497896  347381 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:55 SystemTime:2025-11-19 02:35:47.48750667 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:35:47.498059  347381 docker.go:319] overlay module found
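minikube parses the full docker system info JSON above; the one field that drives the cgroup handling later in this run can be spot-checked on its own (.CgroupDriver is a standard field of the info output):
	docker system info --format '{{.CgroupDriver}}'
	# expected on this host: systemd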
	I1119 02:35:47.499650  347381 out.go:179] * Using the docker driver based on existing profile
	I1119 02:35:47.500795  347381 start.go:309] selected driver: docker
	I1119 02:35:47.500811  347381 start.go:930] validating driver "docker" against &{Name:newest-cni-239505 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-239505 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:35:47.500922  347381 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:35:47.501645  347381 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:35:47.559773  347381 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:55 SystemTime:2025-11-19 02:35:47.550467875 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:35:47.560039  347381 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 02:35:47.560072  347381 cni.go:84] Creating CNI manager for ""
	I1119 02:35:47.560118  347381 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 02:35:47.560156  347381 start.go:353] cluster config:
	{Name:newest-cni-239505 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-239505 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:35:47.562111  347381 out.go:179] * Starting "newest-cni-239505" primary control-plane node in "newest-cni-239505" cluster
	I1119 02:35:47.563255  347381 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1119 02:35:47.564636  347381 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 02:35:47.565884  347381 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 02:35:47.565918  347381 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1119 02:35:47.565925  347381 cache.go:65] Caching tarball of preloaded images
	I1119 02:35:47.565998  347381 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 02:35:47.566042  347381 preload.go:238] Found /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1119 02:35:47.566053  347381 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1119 02:35:47.566186  347381 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/newest-cni-239505/config.json ...
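The profile config saved here is plain JSON on disk. As a sketch (assuming the field names mirror the cluster-config struct dumped above), jq can confirm the version the profile is pinned to:
	jq -r '.KubernetesConfig.KubernetesVersion' \
	  /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/newest-cni-239505/config.json
	# expected: v1.34.1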
	I1119 02:35:47.587344  347381 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 02:35:47.587383  347381 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 02:35:47.587402  347381 cache.go:243] Successfully downloaded all kic artifacts
	I1119 02:35:47.587428  347381 start.go:360] acquireMachinesLock for newest-cni-239505: {Name:mke53f1011bae5762647d8cf2de4903cc4de19ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:35:47.587481  347381 start.go:364] duration metric: took 35.624µs to acquireMachinesLock for "newest-cni-239505"
	I1119 02:35:47.587498  347381 start.go:96] Skipping create...Using existing machine configuration
	I1119 02:35:47.587503  347381 fix.go:54] fixHost starting: 
	I1119 02:35:47.587701  347381 cli_runner.go:164] Run: docker container inspect newest-cni-239505 --format={{.State.Status}}
	I1119 02:35:47.605970  347381 fix.go:112] recreateIfNeeded on newest-cni-239505: state=Stopped err=<nil>
	W1119 02:35:47.605998  347381 fix.go:138] unexpected machine state, will restart: <nil>
	W1119 02:35:44.919531  337314 node_ready.go:57] node "default-k8s-diff-port-543625" has "Ready":"False" status (will retry)
	W1119 02:35:46.919687  337314 node_ready.go:57] node "default-k8s-diff-port-543625" has "Ready":"False" status (will retry)
	W1119 02:35:48.919750  337314 node_ready.go:57] node "default-k8s-diff-port-543625" has "Ready":"False" status (will retry)
	I1119 02:35:49.420183  337314 node_ready.go:49] node "default-k8s-diff-port-543625" is "Ready"
	I1119 02:35:49.420212  337314 node_ready.go:38] duration metric: took 11.004235265s for node "default-k8s-diff-port-543625" to be "Ready" ...
	I1119 02:35:49.420228  337314 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:35:49.420271  337314 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:35:49.432830  337314 api_server.go:72] duration metric: took 11.337365597s to wait for apiserver process to appear ...
	I1119 02:35:49.432857  337314 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:35:49.432874  337314 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1119 02:35:49.438201  337314 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1119 02:35:49.439299  337314 api_server.go:141] control plane version: v1.34.1
	I1119 02:35:49.439331  337314 api_server.go:131] duration metric: took 6.466118ms to wait for apiserver health ...
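The healthz probe above is a plain HTTPS GET and can be reproduced by hand from the same host; -k skips verification of the cluster's self-signed CA:
	curl -k https://192.168.103.2:8444/healthz
	# expected body: ok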
	I1119 02:35:49.439342  337314 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 02:35:49.442756  337314 system_pods.go:59] 8 kube-system pods found
	I1119 02:35:49.442794  337314 system_pods.go:61] "coredns-66bc5c9577-8tnd6" [01ac50b7-4308-4544-8340-ae41c3dd2992] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:35:49.442801  337314 system_pods.go:61] "etcd-default-k8s-diff-port-543625" [be16351c-840a-4407-8dda-00b2d1adc11e] Running
	I1119 02:35:49.442807  337314 system_pods.go:61] "kindnet-ddmgw" [88b25c10-b469-410d-8418-e0ceaa17a8ea] Running
	I1119 02:35:49.442811  337314 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-543625" [2c449f2a-2ed8-42d3-a05b-6a3723cd057d] Running
	I1119 02:35:49.442815  337314 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-543625" [a2c42093-b10e-432b-b083-b487eb49e46c] Running
	I1119 02:35:49.442818  337314 system_pods.go:61] "kube-proxy-lk5qw" [36f0dd8e-7095-4d43-b4f3-4a4b11b6f852] Running
	I1119 02:35:49.442822  337314 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-543625" [7b32234f-d8c9-4e4d-bc07-e725a2b14c3e] Running
	I1119 02:35:49.442826  337314 system_pods.go:61] "storage-provisioner" [b72767a4-d2fe-420c-9cd5-7877ae681fd5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:35:49.442835  337314 system_pods.go:74] duration metric: took 3.486977ms to wait for pod list to return data ...
	I1119 02:35:49.442843  337314 default_sa.go:34] waiting for default service account to be created ...
	I1119 02:35:49.445443  337314 default_sa.go:45] found service account: "default"
	I1119 02:35:49.445474  337314 default_sa.go:55] duration metric: took 2.616799ms for default service account to be created ...
	I1119 02:35:49.445483  337314 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 02:35:49.448952  337314 system_pods.go:86] 8 kube-system pods found
	I1119 02:35:49.448989  337314 system_pods.go:89] "coredns-66bc5c9577-8tnd6" [01ac50b7-4308-4544-8340-ae41c3dd2992] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:35:49.448998  337314 system_pods.go:89] "etcd-default-k8s-diff-port-543625" [be16351c-840a-4407-8dda-00b2d1adc11e] Running
	I1119 02:35:49.449008  337314 system_pods.go:89] "kindnet-ddmgw" [88b25c10-b469-410d-8418-e0ceaa17a8ea] Running
	I1119 02:35:49.449015  337314 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-543625" [2c449f2a-2ed8-42d3-a05b-6a3723cd057d] Running
	I1119 02:35:49.449022  337314 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-543625" [a2c42093-b10e-432b-b083-b487eb49e46c] Running
	I1119 02:35:49.449032  337314 system_pods.go:89] "kube-proxy-lk5qw" [36f0dd8e-7095-4d43-b4f3-4a4b11b6f852] Running
	I1119 02:35:49.449037  337314 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-543625" [7b32234f-d8c9-4e4d-bc07-e725a2b14c3e] Running
	I1119 02:35:49.449044  337314 system_pods.go:89] "storage-provisioner" [b72767a4-d2fe-420c-9cd5-7877ae681fd5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:35:49.449075  337314 retry.go:31] will retry after 292.278482ms: missing components: kube-dns
	I1119 02:35:49.746679  337314 system_pods.go:86] 8 kube-system pods found
	I1119 02:35:49.746721  337314 system_pods.go:89] "coredns-66bc5c9577-8tnd6" [01ac50b7-4308-4544-8340-ae41c3dd2992] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:35:49.746730  337314 system_pods.go:89] "etcd-default-k8s-diff-port-543625" [be16351c-840a-4407-8dda-00b2d1adc11e] Running
	I1119 02:35:49.746739  337314 system_pods.go:89] "kindnet-ddmgw" [88b25c10-b469-410d-8418-e0ceaa17a8ea] Running
	I1119 02:35:49.746743  337314 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-543625" [2c449f2a-2ed8-42d3-a05b-6a3723cd057d] Running
	I1119 02:35:49.746746  337314 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-543625" [a2c42093-b10e-432b-b083-b487eb49e46c] Running
	I1119 02:35:49.746751  337314 system_pods.go:89] "kube-proxy-lk5qw" [36f0dd8e-7095-4d43-b4f3-4a4b11b6f852] Running
	I1119 02:35:49.746756  337314 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-543625" [7b32234f-d8c9-4e4d-bc07-e725a2b14c3e] Running
	I1119 02:35:49.746763  337314 system_pods.go:89] "storage-provisioner" [b72767a4-d2fe-420c-9cd5-7877ae681fd5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:35:49.746782  337314 retry.go:31] will retry after 378.870289ms: missing components: kube-dns
	I1119 02:35:50.130700  337314 system_pods.go:86] 8 kube-system pods found
	I1119 02:35:50.130736  337314 system_pods.go:89] "coredns-66bc5c9577-8tnd6" [01ac50b7-4308-4544-8340-ae41c3dd2992] Running
	I1119 02:35:50.130745  337314 system_pods.go:89] "etcd-default-k8s-diff-port-543625" [be16351c-840a-4407-8dda-00b2d1adc11e] Running
	I1119 02:35:50.130752  337314 system_pods.go:89] "kindnet-ddmgw" [88b25c10-b469-410d-8418-e0ceaa17a8ea] Running
	I1119 02:35:50.130758  337314 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-543625" [2c449f2a-2ed8-42d3-a05b-6a3723cd057d] Running
	I1119 02:35:50.130761  337314 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-543625" [a2c42093-b10e-432b-b083-b487eb49e46c] Running
	I1119 02:35:50.130765  337314 system_pods.go:89] "kube-proxy-lk5qw" [36f0dd8e-7095-4d43-b4f3-4a4b11b6f852] Running
	I1119 02:35:50.130778  337314 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-543625" [7b32234f-d8c9-4e4d-bc07-e725a2b14c3e] Running
	I1119 02:35:50.130784  337314 system_pods.go:89] "storage-provisioner" [b72767a4-d2fe-420c-9cd5-7877ae681fd5] Running
	I1119 02:35:50.130795  337314 system_pods.go:126] duration metric: took 685.305524ms to wait for k8s-apps to be running ...
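The same readiness picture is available through kubectl once the run finishes (minikube names the context after the profile, as the Done! line further down confirms):
	kubectl --context default-k8s-diff-port-543625 get pods -n kube-system
	# all eight pods listed above should report Running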
	I1119 02:35:50.130809  337314 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 02:35:50.130857  337314 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:35:50.145231  337314 system_svc.go:56] duration metric: took 14.411439ms WaitForService to wait for kubelet
	I1119 02:35:50.145264  337314 kubeadm.go:587] duration metric: took 12.049805392s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:35:50.145284  337314 node_conditions.go:102] verifying NodePressure condition ...
	I1119 02:35:50.148998  337314 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 02:35:50.149034  337314 node_conditions.go:123] node cpu capacity is 8
	I1119 02:35:50.149050  337314 node_conditions.go:105] duration metric: took 3.75978ms to run NodePressure ...
	I1119 02:35:50.149064  337314 start.go:242] waiting for startup goroutines ...
	I1119 02:35:50.149074  337314 start.go:247] waiting for cluster config update ...
	I1119 02:35:50.149085  337314 start.go:256] writing updated cluster config ...
	I1119 02:35:50.149449  337314 ssh_runner.go:195] Run: rm -f paused
	I1119 02:35:50.153799  337314 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:35:50.157270  337314 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8tnd6" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:35:50.161954  337314 pod_ready.go:94] pod "coredns-66bc5c9577-8tnd6" is "Ready"
	I1119 02:35:50.161982  337314 pod_ready.go:86] duration metric: took 4.688734ms for pod "coredns-66bc5c9577-8tnd6" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:35:50.163888  337314 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-543625" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:35:50.167504  337314 pod_ready.go:94] pod "etcd-default-k8s-diff-port-543625" is "Ready"
	I1119 02:35:50.167523  337314 pod_ready.go:86] duration metric: took 3.613752ms for pod "etcd-default-k8s-diff-port-543625" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:35:50.169462  337314 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-543625" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:35:50.173228  337314 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-543625" is "Ready"
	I1119 02:35:50.173249  337314 pod_ready.go:86] duration metric: took 3.764954ms for pod "kube-apiserver-default-k8s-diff-port-543625" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:35:50.174980  337314 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-543625" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:35:50.557758  337314 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-543625" is "Ready"
	I1119 02:35:50.557782  337314 pod_ready.go:86] duration metric: took 382.784309ms for pod "kube-controller-manager-default-k8s-diff-port-543625" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:35:50.758083  337314 pod_ready.go:83] waiting for pod "kube-proxy-lk5qw" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:35:51.158326  337314 pod_ready.go:94] pod "kube-proxy-lk5qw" is "Ready"
	I1119 02:35:51.158358  337314 pod_ready.go:86] duration metric: took 400.25163ms for pod "kube-proxy-lk5qw" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:35:51.358261  337314 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-543625" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:35:51.757991  337314 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-543625" is "Ready"
	I1119 02:35:51.758016  337314 pod_ready.go:86] duration metric: took 399.729831ms for pod "kube-scheduler-default-k8s-diff-port-543625" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:35:51.758027  337314 pod_ready.go:40] duration metric: took 1.604193259s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:35:51.801860  337314 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 02:35:51.804130  337314 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-543625" cluster and "default" namespace by default
	I1119 02:35:47.607672  347381 out.go:252] * Restarting existing docker container for "newest-cni-239505" ...
	I1119 02:35:47.607739  347381 cli_runner.go:164] Run: docker start newest-cni-239505
	I1119 02:35:47.890163  347381 cli_runner.go:164] Run: docker container inspect newest-cni-239505 --format={{.State.Status}}
	I1119 02:35:47.909657  347381 kic.go:430] container "newest-cni-239505" state is running.
	I1119 02:35:47.910050  347381 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-239505
	I1119 02:35:47.930867  347381 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/newest-cni-239505/config.json ...
	I1119 02:35:47.931124  347381 machine.go:94] provisionDockerMachine start ...
	I1119 02:35:47.931201  347381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-239505
	I1119 02:35:47.950868  347381 main.go:143] libmachine: Using SSH client type: native
	I1119 02:35:47.951147  347381 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1119 02:35:47.951170  347381 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 02:35:47.951759  347381 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41488->127.0.0.1:33135: read: connection reset by peer
	I1119 02:35:51.087409  347381 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-239505
	
	I1119 02:35:51.087442  347381 ubuntu.go:182] provisioning hostname "newest-cni-239505"
	I1119 02:35:51.087503  347381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-239505
	I1119 02:35:51.106329  347381 main.go:143] libmachine: Using SSH client type: native
	I1119 02:35:51.106588  347381 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1119 02:35:51.106604  347381 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-239505 && echo "newest-cni-239505" | sudo tee /etc/hostname
	I1119 02:35:51.250055  347381 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-239505
	
	I1119 02:35:51.250142  347381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-239505
	I1119 02:35:51.269426  347381 main.go:143] libmachine: Using SSH client type: native
	I1119 02:35:51.269653  347381 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1119 02:35:51.269675  347381 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-239505' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-239505/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-239505' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 02:35:51.403348  347381 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 02:35:51.403404  347381 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-11107/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-11107/.minikube}
	I1119 02:35:51.403429  347381 ubuntu.go:190] setting up certificates
	I1119 02:35:51.403448  347381 provision.go:84] configureAuth start
	I1119 02:35:51.403493  347381 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-239505
	I1119 02:35:51.423151  347381 provision.go:143] copyHostCerts
	I1119 02:35:51.423228  347381 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11107/.minikube/ca.pem, removing ...
	I1119 02:35:51.423248  347381 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11107/.minikube/ca.pem
	I1119 02:35:51.423399  347381 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-11107/.minikube/ca.pem (1082 bytes)
	I1119 02:35:51.423546  347381 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11107/.minikube/cert.pem, removing ...
	I1119 02:35:51.423560  347381 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11107/.minikube/cert.pem
	I1119 02:35:51.423611  347381 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-11107/.minikube/cert.pem (1123 bytes)
	I1119 02:35:51.423708  347381 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11107/.minikube/key.pem, removing ...
	I1119 02:35:51.423719  347381 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11107/.minikube/key.pem
	I1119 02:35:51.423753  347381 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-11107/.minikube/key.pem (1675 bytes)
	I1119 02:35:51.423824  347381 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-11107/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca-key.pem org=jenkins.newest-cni-239505 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-239505]
	I1119 02:35:51.555080  347381 provision.go:177] copyRemoteCerts
	I1119 02:35:51.555155  347381 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 02:35:51.555200  347381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-239505
	I1119 02:35:51.574477  347381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/newest-cni-239505/id_rsa Username:docker}
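The sshutil line above carries everything needed to open the same session by hand; a minimal check using the key path, port, and user from this log:
	ssh -o StrictHostKeyChecking=no -p 33135 \
	  -i /home/jenkins/minikube-integration/21924-11107/.minikube/machines/newest-cni-239505/id_rsa \
	  docker@127.0.0.1 hostname
	# expected: newest-cni-239505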
	I1119 02:35:51.670101  347381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 02:35:51.688576  347381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 02:35:51.705998  347381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 02:35:51.723963  347381 provision.go:87] duration metric: took 320.502408ms to configureAuth
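The server certificate generated during configureAuth embeds the SANs listed in the provision line above; one way to verify, using the path from this log (openssl output formatting varies by version):
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21924-11107/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
	# expect the SAN set: 127.0.0.1, 192.168.85.2, localhost, minikube, newest-cni-239505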
	I1119 02:35:51.723996  347381 ubuntu.go:206] setting minikube options for container-runtime
	I1119 02:35:51.724225  347381 config.go:182] Loaded profile config "newest-cni-239505": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:35:51.724241  347381 machine.go:97] duration metric: took 3.7931035s to provisionDockerMachine
	I1119 02:35:51.724251  347381 start.go:293] postStartSetup for "newest-cni-239505" (driver="docker")
	I1119 02:35:51.724263  347381 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 02:35:51.724318  347381 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 02:35:51.724401  347381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-239505
	I1119 02:35:51.742905  347381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/newest-cni-239505/id_rsa Username:docker}
	I1119 02:35:51.840030  347381 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 02:35:51.844169  347381 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 02:35:51.844207  347381 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 02:35:51.844221  347381 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11107/.minikube/addons for local assets ...
	I1119 02:35:51.844290  347381 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11107/.minikube/files for local assets ...
	I1119 02:35:51.844448  347381 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem -> 146572.pem in /etc/ssl/certs
	I1119 02:35:51.844571  347381 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 02:35:51.853823  347381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem --> /etc/ssl/certs/146572.pem (1708 bytes)
	I1119 02:35:51.873010  347381 start.go:296] duration metric: took 148.732465ms for postStartSetup
	I1119 02:35:51.873085  347381 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:35:51.873116  347381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-239505
	I1119 02:35:51.894305  347381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/newest-cni-239505/id_rsa Username:docker}
	I1119 02:35:51.988416  347381 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 02:35:51.993832  347381 fix.go:56] duration metric: took 4.406321541s for fixHost
	I1119 02:35:51.993857  347381 start.go:83] releasing machines lock for "newest-cni-239505", held for 4.406364595s
	I1119 02:35:51.993922  347381 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-239505
	I1119 02:35:52.012608  347381 ssh_runner.go:195] Run: cat /version.json
	I1119 02:35:52.012653  347381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-239505
	I1119 02:35:52.012698  347381 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 02:35:52.012764  347381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-239505
	I1119 02:35:52.033522  347381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/newest-cni-239505/id_rsa Username:docker}
	I1119 02:35:52.033946  347381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/newest-cni-239505/id_rsa Username:docker}
	I1119 02:35:52.181828  347381 ssh_runner.go:195] Run: systemctl --version
	I1119 02:35:52.188860  347381 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 02:35:52.193697  347381 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 02:35:52.193761  347381 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 02:35:52.201832  347381 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 02:35:52.201855  347381 start.go:496] detecting cgroup driver to use...
	I1119 02:35:52.201882  347381 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 02:35:52.201918  347381 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1119 02:35:52.218073  347381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1119 02:35:52.230964  347381 docker.go:218] disabling cri-docker service (if available) ...
	I1119 02:35:52.231008  347381 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 02:35:52.246109  347381 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 02:35:52.258818  347381 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 02:35:52.343216  347381 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 02:35:52.432854  347381 docker.go:234] disabling docker service ...
	I1119 02:35:52.432916  347381 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 02:35:52.448350  347381 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 02:35:52.461059  347381 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 02:35:52.538432  347381 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 02:35:52.617421  347381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 02:35:52.629879  347381 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 02:35:52.644238  347381 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1119 02:35:52.653177  347381 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1119 02:35:52.662177  347381 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1119 02:35:52.662268  347381 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1119 02:35:52.671552  347381 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 02:35:52.680263  347381 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1119 02:35:52.689420  347381 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 02:35:52.698659  347381 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 02:35:52.707126  347381 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1119 02:35:52.716477  347381 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1119 02:35:52.725757  347381 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1119 02:35:52.734728  347381 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 02:35:52.742163  347381 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 02:35:52.749627  347381 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:35:52.828250  347381 ssh_runner.go:195] Run: sudo systemctl restart containerd
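Once containerd restarts, the sed edits above can be spot-checked inside the node (expected values taken from the commands in this log):
	sudo grep -nE 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml
	# expect: SystemdCgroup = true  and  sandbox_image = "registry.k8s.io/pause:3.10.1"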
	I1119 02:35:52.932734  347381 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1119 02:35:52.932786  347381 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1119 02:35:52.936971  347381 start.go:564] Will wait 60s for crictl version
	I1119 02:35:52.937019  347381 ssh_runner.go:195] Run: which crictl
	I1119 02:35:52.940531  347381 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 02:35:52.965452  347381 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1119 02:35:52.965524  347381 ssh_runner.go:195] Run: containerd --version
	I1119 02:35:52.986594  347381 ssh_runner.go:195] Run: containerd --version
	I1119 02:35:53.009596  347381 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1119 02:35:53.011151  347381 cli_runner.go:164] Run: docker network inspect newest-cni-239505 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:35:53.030555  347381 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1119 02:35:53.034715  347381 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:35:53.047172  347381 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1119 02:35:53.048457  347381 kubeadm.go:884] updating cluster {Name:newest-cni-239505 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-239505 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 02:35:53.048602  347381 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 02:35:53.048674  347381 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:35:53.074705  347381 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 02:35:53.074723  347381 containerd.go:534] Images already preloaded, skipping extraction
	I1119 02:35:53.074769  347381 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:35:53.100350  347381 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 02:35:53.100396  347381 cache_images.go:86] Images are preloaded, skipping loading
	I1119 02:35:53.100406  347381 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1119 02:35:53.100497  347381 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-239505 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-239505 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 02:35:53.100552  347381 ssh_runner.go:195] Run: sudo crictl info
	I1119 02:35:53.127295  347381 cni.go:84] Creating CNI manager for ""
	I1119 02:35:53.127316  347381 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 02:35:53.127330  347381 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1119 02:35:53.127349  347381 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-239505 NodeName:newest-cni-239505 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:35:53.127508  347381 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-239505"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 02:35:53.127572  347381 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 02:35:53.135646  347381 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 02:35:53.135722  347381 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:35:53.143558  347381 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1119 02:35:53.156672  347381 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:35:53.169938  347381 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
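The rendered kubeadm config (shown in full above) is shipped to /var/tmp/minikube/kubeadm.yaml.new. The log never validates it explicitly, but one way to sanity-check such a file by hand is kubeadm's dry-run mode, a hedged sketch:

    # Parses and validates the config without mutating the node
    # (not a step minikube runs here).
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run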
	I1119 02:35:53.184132  347381 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:35:53.188213  347381 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:35:53.199678  347381 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:35:53.280671  347381 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:35:53.305099  347381 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/newest-cni-239505 for IP: 192.168.85.2
	I1119 02:35:53.305124  347381 certs.go:195] generating shared ca certs ...
	I1119 02:35:53.305145  347381 certs.go:227] acquiring lock for ca certs: {Name:mk11d6789b2333e17b3937495b501fbcca15c242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:35:53.305300  347381 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-11107/.minikube/ca.key
	I1119 02:35:53.305343  347381 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-11107/.minikube/proxy-client-ca.key
	I1119 02:35:53.305353  347381 certs.go:257] generating profile certs ...
	I1119 02:35:53.305468  347381 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/newest-cni-239505/client.key
	I1119 02:35:53.305518  347381 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/newest-cni-239505/apiserver.key.3b68aa73
	I1119 02:35:53.305553  347381 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/newest-cni-239505/proxy-client.key
	I1119 02:35:53.305671  347381 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/14657.pem (1338 bytes)
	W1119 02:35:53.305702  347381 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-11107/.minikube/certs/14657_empty.pem, impossibly tiny 0 bytes
	I1119 02:35:53.305712  347381 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 02:35:53.305732  347381 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem (1082 bytes)
	I1119 02:35:53.305756  347381 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:35:53.305778  347381 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/key.pem (1675 bytes)
	I1119 02:35:53.305817  347381 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem (1708 bytes)
	I1119 02:35:53.306462  347381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:35:53.326098  347381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 02:35:53.344940  347381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:35:53.363844  347381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:35:53.387719  347381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/newest-cni-239505/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 02:35:53.409627  347381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/newest-cni-239505/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 02:35:53.429288  347381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/newest-cni-239505/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:35:53.448417  347381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/newest-cni-239505/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 02:35:53.466531  347381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem --> /usr/share/ca-certificates/146572.pem (1708 bytes)
	I1119 02:35:53.484649  347381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:35:53.504556  347381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/certs/14657.pem --> /usr/share/ca-certificates/14657.pem (1338 bytes)
	I1119 02:35:53.524324  347381 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 02:35:53.538770  347381 ssh_runner.go:195] Run: openssl version
	I1119 02:35:53.545089  347381 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:35:53.554642  347381 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:35:53.558994  347381 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:57 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:35:53.559065  347381 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:35:53.595031  347381 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 02:35:53.604362  347381 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14657.pem && ln -fs /usr/share/ca-certificates/14657.pem /etc/ssl/certs/14657.pem"
	I1119 02:35:53.613125  347381 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14657.pem
	I1119 02:35:53.616990  347381 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:02 /usr/share/ca-certificates/14657.pem
	I1119 02:35:53.617052  347381 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14657.pem
	I1119 02:35:53.651629  347381 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14657.pem /etc/ssl/certs/51391683.0"
	I1119 02:35:53.660258  347381 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146572.pem && ln -fs /usr/share/ca-certificates/146572.pem /etc/ssl/certs/146572.pem"
	I1119 02:35:53.669226  347381 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146572.pem
	I1119 02:35:53.673309  347381 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:02 /usr/share/ca-certificates/146572.pem
	I1119 02:35:53.673406  347381 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146572.pem
	I1119 02:35:53.708041  347381 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/146572.pem /etc/ssl/certs/3ec20f2e.0"
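The openssl/ln sequence above implements OpenSSL's hashed-directory lookup convention: a CA under /etc/ssl/certs must also be reachable as <subject-hash>.0, which is why minikubeCA.pem gets the b5213941.0 alias. A sketch for one certificate:

    # Compute the subject hash and create the <hash>.0 symlink OpenSSL looks up.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"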
	I1119 02:35:53.716703  347381 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:35:53.721178  347381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 02:35:53.757789  347381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 02:35:53.794527  347381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 02:35:53.837676  347381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 02:35:53.893972  347381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 02:35:53.950806  347381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
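Each -checkend 86400 probe above asks whether the certificate will still be valid 24 hours from now; openssl exits non-zero if it expires within the window. The same guard for one cert, as a sketch:

    if ! openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
      echo "certificate expires within 24h"
    fi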
	I1119 02:35:54.004163  347381 kubeadm.go:401] StartCluster: {Name:newest-cni-239505 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-239505 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:35:54.004290  347381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1119 02:35:54.004386  347381 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:35:54.048518  347381 cri.go:89] found id: "896e7ba9918c57df7870e60ee2adfd4ca68e4338b6a5c1f04c9a3cb7d1674332"
	I1119 02:35:54.048542  347381 cri.go:89] found id: "f1fae129650bdbd767c99285efd7d9eb755597dfbc088ab6ee9224ecf8ddf166"
	I1119 02:35:54.048548  347381 cri.go:89] found id: "44349b33052492aa2604afff47ff7a4ddcf5219f43a47923fa749dbb520c7b05"
	I1119 02:35:54.048552  347381 cri.go:89] found id: "394de3b4f48c01a85e5b574db877f5c3334a77a0463ddbe582de92b32e2833d7"
	I1119 02:35:54.048556  347381 cri.go:89] found id: "3666964818012148dfe702358f6c5b27f9578c563a1c811a27e8021a2c2ff2eb"
	I1119 02:35:54.048562  347381 cri.go:89] found id: "5793bf39ee28f256f3b97ac85ab00dd5f99741dd895d82536db1f9d3bc949d44"
	I1119 02:35:54.048565  347381 cri.go:89] found id: "df9e67fb6ca6844de2b82323fda742288ea474a4187c8e77dde384e7f2fe1aa6"
	I1119 02:35:54.048578  347381 cri.go:89] found id: "56b78f46816862ab6f488a86861ccb62a8b8fd4aede37d3b21ade91af763aa96"
	I1119 02:35:54.048582  347381 cri.go:89] found id: "9156f446b7b25b9d9d01460e58eda771b3f4a04a1c3d55a14391065cc55a1560"
	I1119 02:35:54.048627  347381 cri.go:89] found id: "5c587b93ffb1ce6e0fb1e971f6c70168e2813290fbb13003dea1688c7063686b"
	I1119 02:35:54.048635  347381 cri.go:89] found id: ""
	I1119 02:35:54.048685  347381 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1119 02:35:54.082340  347381 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"0a5f67efee16c45c483b238e0efa2ed85d84df9112a6f64b6784b2ab24654773","pid":847,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a5f67efee16c45c483b238e0efa2ed85d84df9112a6f64b6784b2ab24654773","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a5f67efee16c45c483b238e0efa2ed85d84df9112a6f64b6784b2ab24654773/rootfs","created":"2025-11-19T02:35:53.889699047Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"0a5f67efee16c45c483b238e0efa2ed85d84df9112a6f64b6784b2ab24654773","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-239505_0587d0fb67ac1bf68b023e756b989c11","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-newest-cni-239505","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"0587d0fb67ac1bf68b023e756b989c11"},"owner":"root"},{"ociVersion":"1.2.1","id":"394de3b4f48c01a85e5b574db877f5c3334a77a0463ddbe582de92b32e2833d7","pid":935,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/394de3b4f48c01a85e5b574db877f5c3334a77a0463ddbe582de92b32e2833d7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/394de3b4f48c01a85e5b574db877f5c3334a77a0463ddbe582de92b32e2833d7/rootfs","created":"2025-11-19T02:35:54.004488733Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.1","io.kubernetes.cri.sandbox-id":"3c1254266e465449e256adf1cd3f71dc93083029bcb62014a050338c640350bf","io.kubernetes.cri.sandbox-name":"kube-apiserver-newest-cni-239505","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ce56d7590bb14d4c47c79f7a8212f6f8"},"owner":"root"},{"ociVersion":"1.2.1","id":"3c1254266e465449e256adf1cd3f71dc93083029bcb62014a050338c640350bf","pid":834,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c1254266e465449e256adf1cd3f71dc93083029bcb62014a050338c640350bf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c1254266e465449e256adf1cd3f71dc93083029bcb62014a050338c640350bf/rootfs","created":"2025-11-19T02:35:53.88307603Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"3c1254266e465449e256adf1cd3f71dc93083029bcb62014a050338c640350bf","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-newest-cni-239505_ce56d7590bb14d4c47c79f7a8212f6f8","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-newest-cni-239505","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ce56d7590bb14d4c47c79f7a8212f6f8"},"owner":"root"},{"ociVersion":"1.2.1","id":"44349b33052492aa2604afff47ff7a4ddcf5219f43a47923fa749dbb520c7b05","pid":947,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/44349b33052492aa2604afff47ff7a4ddcf5219f43a47923fa749dbb520c7b05","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/44349b33052492aa2604afff47ff7a4ddcf5219f43a47923fa749dbb520c7b05/rootfs","created":"2025-11-19T02:35:54.016290745Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.1","io.kubernetes.cri.sandbox-id":"0a5f67efee16c45c483b238e0efa2ed85d84df9112a6f64b6784b2ab24654773","io.kubernetes.cri.sandbox-name":"kube-controller-manager-newest-cni-239505","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"0587d0fb67ac1bf68b023e756b989c11"},"owner":"root"},{"ociVersion":"1.2.1","id":"896e7ba9918c57df7870e60ee2adfd4ca68e4338b6a5c1f04c9a3cb7d1674332","pid":979,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/896e7ba9918c57df7870e60ee2adfd4ca68e4338b6a5c1f04c9a3cb7d1674332","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/896e7ba9918c57df7870e60ee2adfd4ca68e4338b6a5c1f04c9a3cb7d1674332/rootfs","created":"2025-11-19T02:35:54.023734678Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri.sandbox-id":"bfced899161a0337ae5adf67427b33a3b8f27c734b69eda80fd7b34a527982f3","io.kubernetes.cri.sandbox-name":"etcd-newest-cni-239505","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"bcf32e34d14ad59cd0ad1ca743424b20"},"owner":"root"},{"ociVersion":"1.2.1","id":"bfced899161a0337ae5adf67427b33a3b8f27c734b69eda80fd7b34a527982f3","pid":871,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bfced899161a0337ae5adf67427b33a3b8f27c734b69eda80fd7b34a527982f3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bfced899161a0337ae5adf67427b33a3b8f27c734b69eda80fd7b34a527982f3/rootfs","created":"2025-11-19T02:35:53.902079564Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"bfced899161a0337ae5adf67427b33a3b8f27c734b69eda80fd7b34a527982f3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-newest-cni-239505_bcf32e34d14ad59cd0ad1ca743424b20","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-newest-cni-239505","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"bcf32e34d14ad59cd0ad1ca743424b20"},"owner":"root"},{"ociVersion":"1.2.1","id":"d8a2b8d0e3fa887b4bde6ebcfa5d21cabbbcde892d351228350666759e15659a","pid":864,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d8a2b8d0e3fa887b4bde6ebcfa5d21cabbbcde892d351228350666759e15659a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d8a2b8d0e3fa887b4bde6ebcfa5d21cabbbcde892d351228350666759e15659a/rootfs","created":"2025-11-19T02:35:53.895686015Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"d8a2b8d0e3fa887b4bde6ebcfa5d21cabbbcde892d351228350666759e15659a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-newest-cni-239505_fbb8d14132830afcf406a097e2f2b384","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-newest-cni-239505","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"fbb8d14132830afcf406a097e2f2b384"},"owner":"root"},{"ociVersion":"1.2.1","id":"f1fae129650bdbd767c99285efd7d9eb755597dfbc088ab6ee9224ecf8ddf166","pid":972,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f1fae129650bdbd767c99285efd7d9eb755597dfbc088ab6ee9224ecf8ddf166","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f1fae129650bdbd767c99285efd7d9eb755597dfbc088ab6ee9224ecf8ddf166/rootfs","created":"2025-11-19T02:35:54.017922437Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.1","io.kubernetes.cri.sandbox-id":"d8a2b8d0e3fa887b4bde6ebcfa5d21cabbbcde892d351228350666759e15659a","io.kubernetes.cri.sandbox-name":"kube-scheduler-newest-cni-239505","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"fbb8d14132830afcf406a097e2f2b384"},"owner":"root"}]
	I1119 02:35:54.082633  347381 cri.go:126] list returned 8 containers
	I1119 02:35:54.082651  347381 cri.go:129] container: {ID:0a5f67efee16c45c483b238e0efa2ed85d84df9112a6f64b6784b2ab24654773 Status:running}
	I1119 02:35:54.082692  347381 cri.go:131] skipping 0a5f67efee16c45c483b238e0efa2ed85d84df9112a6f64b6784b2ab24654773 - not in ps
	I1119 02:35:54.082717  347381 cri.go:129] container: {ID:394de3b4f48c01a85e5b574db877f5c3334a77a0463ddbe582de92b32e2833d7 Status:running}
	I1119 02:35:54.082730  347381 cri.go:135] skipping {394de3b4f48c01a85e5b574db877f5c3334a77a0463ddbe582de92b32e2833d7 running}: state = "running", want "paused"
	I1119 02:35:54.082750  347381 cri.go:129] container: {ID:3c1254266e465449e256adf1cd3f71dc93083029bcb62014a050338c640350bf Status:running}
	I1119 02:35:54.082757  347381 cri.go:131] skipping 3c1254266e465449e256adf1cd3f71dc93083029bcb62014a050338c640350bf - not in ps
	I1119 02:35:54.082773  347381 cri.go:129] container: {ID:44349b33052492aa2604afff47ff7a4ddcf5219f43a47923fa749dbb520c7b05 Status:running}
	I1119 02:35:54.082798  347381 cri.go:135] skipping {44349b33052492aa2604afff47ff7a4ddcf5219f43a47923fa749dbb520c7b05 running}: state = "running", want "paused"
	I1119 02:35:54.082811  347381 cri.go:129] container: {ID:896e7ba9918c57df7870e60ee2adfd4ca68e4338b6a5c1f04c9a3cb7d1674332 Status:running}
	I1119 02:35:54.082819  347381 cri.go:135] skipping {896e7ba9918c57df7870e60ee2adfd4ca68e4338b6a5c1f04c9a3cb7d1674332 running}: state = "running", want "paused"
	I1119 02:35:54.082830  347381 cri.go:129] container: {ID:bfced899161a0337ae5adf67427b33a3b8f27c734b69eda80fd7b34a527982f3 Status:running}
	I1119 02:35:54.082838  347381 cri.go:131] skipping bfced899161a0337ae5adf67427b33a3b8f27c734b69eda80fd7b34a527982f3 - not in ps
	I1119 02:35:54.082847  347381 cri.go:129] container: {ID:d8a2b8d0e3fa887b4bde6ebcfa5d21cabbbcde892d351228350666759e15659a Status:running}
	I1119 02:35:54.082856  347381 cri.go:131] skipping d8a2b8d0e3fa887b4bde6ebcfa5d21cabbbcde892d351228350666759e15659a - not in ps
	I1119 02:35:54.082876  347381 cri.go:129] container: {ID:f1fae129650bdbd767c99285efd7d9eb755597dfbc088ab6ee9224ecf8ddf166 Status:running}
	I1119 02:35:54.082885  347381 cri.go:135] skipping {f1fae129650bdbd767c99285efd7d9eb755597dfbc088ab6ee9224ecf8ddf166 running}: state = "running", want "paused"
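The pause pass above cross-references crictl output with runc's own task list and skips anything whose state is not already "paused". A rough jq rendering of that filter over the same runc JSON (jq availability assumed):

    sudo runc --root /run/containerd/runc/k8s.io list -f json \
      | jq -r '.[] | select(.status != "paused") | "skipping \(.id) - state \(.status)"'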
	I1119 02:35:54.083183  347381 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:35:54.094559  347381 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 02:35:54.094587  347381 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 02:35:54.094641  347381 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 02:35:54.105647  347381 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 02:35:54.106382  347381 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-239505" does not appear in /home/jenkins/minikube-integration/21924-11107/kubeconfig
	I1119 02:35:54.106738  347381 kubeconfig.go:62] /home/jenkins/minikube-integration/21924-11107/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-239505" cluster setting kubeconfig missing "newest-cni-239505" context setting]
	I1119 02:35:54.107347  347381 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/kubeconfig: {Name:mka1017ae43351e828a5b28d36aa5ae57c3e40e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:35:54.109335  347381 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 02:35:54.120053  347381 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1119 02:35:54.120094  347381 kubeadm.go:602] duration metric: took 25.499769ms to restartPrimaryControlPlane
	I1119 02:35:54.120107  347381 kubeadm.go:403] duration metric: took 115.955182ms to StartCluster
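The restart decision above comes down to a plain diff: the control plane is left running because the freshly rendered kubeadm.yaml.new matches what is already on disk. The gate, as a sketch:

    # diff exits 0 when the files are identical, i.e. no reconfiguration needed.
    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
      echo "running cluster does not require reconfiguration"
    fi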
	I1119 02:35:54.120125  347381 settings.go:142] acquiring lock: {Name:mka23b77a5b0acf39a3c823ca2c708cd59c6f5ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:35:54.120200  347381 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11107/kubeconfig
	I1119 02:35:54.121433  347381 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/kubeconfig: {Name:mka1017ae43351e828a5b28d36aa5ae57c3e40e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:35:54.121717  347381 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 02:35:54.121951  347381 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:35:54.122037  347381 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-239505"
	I1119 02:35:54.122058  347381 addons.go:70] Setting default-storageclass=true in profile "newest-cni-239505"
	I1119 02:35:54.122086  347381 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-239505"
	I1119 02:35:54.122095  347381 addons.go:70] Setting metrics-server=true in profile "newest-cni-239505"
	I1119 02:35:54.122098  347381 addons.go:70] Setting dashboard=true in profile "newest-cni-239505"
	I1119 02:35:54.122109  347381 addons.go:239] Setting addon metrics-server=true in "newest-cni-239505"
	I1119 02:35:54.122116  347381 addons.go:239] Setting addon dashboard=true in "newest-cni-239505"
	W1119 02:35:54.122117  347381 addons.go:248] addon metrics-server should already be in state true
	W1119 02:35:54.122124  347381 addons.go:248] addon dashboard should already be in state true
	I1119 02:35:54.122149  347381 host.go:66] Checking if "newest-cni-239505" exists ...
	I1119 02:35:54.122149  347381 host.go:66] Checking if "newest-cni-239505" exists ...
	I1119 02:35:54.122148  347381 config.go:182] Loaded profile config "newest-cni-239505": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:35:54.122086  347381 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-239505"
	W1119 02:35:54.122284  347381 addons.go:248] addon storage-provisioner should already be in state true
	I1119 02:35:54.122308  347381 host.go:66] Checking if "newest-cni-239505" exists ...
	I1119 02:35:54.122444  347381 cli_runner.go:164] Run: docker container inspect newest-cni-239505 --format={{.State.Status}}
	I1119 02:35:54.122652  347381 cli_runner.go:164] Run: docker container inspect newest-cni-239505 --format={{.State.Status}}
	I1119 02:35:54.122694  347381 cli_runner.go:164] Run: docker container inspect newest-cni-239505 --format={{.State.Status}}
	I1119 02:35:54.122776  347381 cli_runner.go:164] Run: docker container inspect newest-cni-239505 --format={{.State.Status}}
	I1119 02:35:54.124605  347381 out.go:179] * Verifying Kubernetes components...
	I1119 02:35:54.126303  347381 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:35:54.152835  347381 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1119 02:35:54.152844  347381 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:35:54.152882  347381 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1119 02:35:54.154137  347381 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1119 02:35:54.154290  347381 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1119 02:35:54.154358  347381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-239505
	I1119 02:35:54.154225  347381 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:35:54.154401  347381 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:35:54.154433  347381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-239505
	I1119 02:35:54.156142  347381 addons.go:239] Setting addon default-storageclass=true in "newest-cni-239505"
	W1119 02:35:54.156326  347381 addons.go:248] addon default-storageclass should already be in state true
	I1119 02:35:54.156404  347381 host.go:66] Checking if "newest-cni-239505" exists ...
	I1119 02:35:54.156708  347381 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1119 02:35:54.157245  347381 cli_runner.go:164] Run: docker container inspect newest-cni-239505 --format={{.State.Status}}
	I1119 02:35:54.157731  347381 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1119 02:35:54.157749  347381 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1119 02:35:54.157798  347381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-239505
	I1119 02:35:54.188718  347381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/newest-cni-239505/id_rsa Username:docker}
	I1119 02:35:54.190320  347381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/newest-cni-239505/id_rsa Username:docker}
	I1119 02:35:54.197862  347381 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:35:54.197888  347381 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:35:54.197964  347381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-239505
	I1119 02:35:54.202966  347381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/newest-cni-239505/id_rsa Username:docker}
	I1119 02:35:54.229773  347381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/newest-cni-239505/id_rsa Username:docker}
	I1119 02:35:54.293796  347381 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:35:54.311195  347381 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:35:54.311273  347381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:35:54.314730  347381 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1119 02:35:54.314754  347381 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1119 02:35:54.316243  347381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:35:54.324201  347381 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1119 02:35:54.324227  347381 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1119 02:35:54.327720  347381 api_server.go:72] duration metric: took 205.965134ms to wait for apiserver process to appear ...
	I1119 02:35:54.327742  347381 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:35:54.327761  347381 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 02:35:54.333078  347381 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1119 02:35:54.333111  347381 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1119 02:35:54.342584  347381 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1119 02:35:54.342783  347381 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1119 02:35:54.351267  347381 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1119 02:35:54.351293  347381 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1119 02:35:54.351611  347381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:35:54.359513  347381 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1119 02:35:54.359539  347381 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1119 02:35:54.371430  347381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1119 02:35:54.381932  347381 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1119 02:35:54.381957  347381 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1119 02:35:54.403472  347381 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1119 02:35:54.403498  347381 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1119 02:35:54.431412  347381 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1119 02:35:54.431437  347381 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1119 02:35:54.457683  347381 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1119 02:35:54.457769  347381 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1119 02:35:54.477806  347381 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1119 02:35:54.477829  347381 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1119 02:35:54.495653  347381 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 02:35:54.495678  347381 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1119 02:35:54.512120  347381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
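After the kubectl apply batches above, one way to watch the addon workloads converge (deployment and namespace names assumed from the standard minikube addon manifests, not shown in this log):

    kubectl -n kube-system rollout status deploy/metrics-server
    kubectl -n kubernetes-dashboard rollout status deploy/kubernetes-dashboard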
	I1119 02:35:55.625832  347381 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1119 02:35:55.625866  347381 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1119 02:35:55.625881  347381 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 02:35:55.644050  347381 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1119 02:35:55.644079  347381 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1119 02:35:55.828689  347381 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 02:35:55.832958  347381 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 02:35:55.832987  347381 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 02:35:56.219881  347381 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.903601528s)
	I1119 02:35:56.219948  347381 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.868306414s)
	I1119 02:35:56.220191  347381 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.848732707s)
	I1119 02:35:56.220217  347381 addons.go:480] Verifying addon metrics-server=true in "newest-cni-239505"
	I1119 02:35:56.220311  347381 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.708152783s)
	I1119 02:35:56.221760  347381 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-239505 addons enable metrics-server
	
	I1119 02:35:56.233075  347381 out.go:179] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I1119 02:35:56.234402  347381 addons.go:515] duration metric: took 2.11245939s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I1119 02:35:56.328111  347381 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 02:35:56.332360  347381 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 02:35:56.332408  347381 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 02:35:56.827902  347381 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 02:35:56.833296  347381 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1119 02:35:56.834340  347381 api_server.go:141] control plane version: v1.34.1
	I1119 02:35:56.834410  347381 api_server.go:131] duration metric: took 2.50665736s to wait for apiserver health ...
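The 2.5s health wait above cycled through 403 (the probe is anonymous), 500 (post-start hooks such as rbac/bootstrap-roles still running), and finally 200. A bare-bones curl version of the same poll (-k because no client certificate is presented):

    until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://192.168.85.2:8443/healthz)" = 200 ]; do
      sleep 0.5
    done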
	I1119 02:35:56.834426  347381 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 02:35:56.837834  347381 system_pods.go:59] 9 kube-system pods found
	I1119 02:35:56.837863  347381 system_pods.go:61] "coredns-66bc5c9577-z2w74" [99f74e7f-9a36-4a6a-ac0c-0e60c6ae6208] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 02:35:56.837904  347381 system_pods.go:61] "etcd-newest-cni-239505" [e289db86-17ff-43b8-8efc-7dc7685bc943] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 02:35:56.837919  347381 system_pods.go:61] "kindnet-xc5xw" [0a431aa6-0127-4041-9a89-b99531aabc57] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1119 02:35:56.837928  347381 system_pods.go:61] "kube-apiserver-newest-cni-239505" [6cf242dd-09d6-42f1-9dcb-700f6f28e5ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 02:35:56.837946  347381 system_pods.go:61] "kube-controller-manager-newest-cni-239505" [0225b199-6c83-44b0-8137-10dd97a97ff0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 02:35:56.837956  347381 system_pods.go:61] "kube-proxy-jq9v9" [dc396cd8-ad47-4e4b-bd85-9aae772343e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1119 02:35:56.837966  347381 system_pods.go:61] "kube-scheduler-newest-cni-239505" [932b6e95-7566-4c69-a21a-26f7e913cb5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 02:35:56.837971  347381 system_pods.go:61] "metrics-server-746fcd58dc-dmggt" [2a4cdf1a-9087-4e82-bf14-03c030548aeb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 02:35:56.837978  347381 system_pods.go:61] "storage-provisioner" [002f233f-52fa-4a85-a93d-c871a0172fba] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 02:35:56.838005  347381 system_pods.go:74] duration metric: took 3.567484ms to wait for pod list to return data ...
	I1119 02:35:56.838016  347381 default_sa.go:34] waiting for default service account to be created ...
	I1119 02:35:56.840123  347381 default_sa.go:45] found service account: "default"
	I1119 02:35:56.840145  347381 default_sa.go:55] duration metric: took 2.119586ms for default service account to be created ...
	I1119 02:35:56.840156  347381 kubeadm.go:587] duration metric: took 2.718405976s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 02:35:56.840170  347381 node_conditions.go:102] verifying NodePressure condition ...
	I1119 02:35:56.842618  347381 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 02:35:56.842654  347381 node_conditions.go:123] node cpu capacity is 8
	I1119 02:35:56.842669  347381 node_conditions.go:105] duration metric: took 2.494569ms to run NodePressure ...
	I1119 02:35:56.842684  347381 start.go:242] waiting for startup goroutines ...
	I1119 02:35:56.842697  347381 start.go:247] waiting for cluster config update ...
	I1119 02:35:56.842715  347381 start.go:256] writing updated cluster config ...
	I1119 02:35:56.843019  347381 ssh_runner.go:195] Run: rm -f paused
	I1119 02:35:56.891775  347381 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 02:35:56.893703  347381 out.go:179] * Done! kubectl is now configured to use "newest-cni-239505" cluster and "default" namespace by default
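	Editor's note: the api_server.go lines above poll /healthz until the apiserver stops returning 500 for the rbac and scheduling poststarthooks. A minimal Go sketch of that kind of polling loop, assuming the endpoint https://192.168.85.2:8443/healthz from this run and a self-signed serving cert (a real client would pin the cluster CA instead of skipping verification); this is illustrative, not minikube's actual code:
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// Self-signed apiserver cert in this environment; pin the cluster CA in real code.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.85.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz ok: %s\n", body)
					return
				}
				// A 500 listing "[-]poststarthook/... failed" is expected while
				// bootstrap roles and priority classes are still being written.
				fmt.Printf("healthz %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("apiserver never became healthy")
	}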
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	96973ef75ad93       56cc512116c8f       8 seconds ago       Running             busybox                   0                   2789d7cc28880       busybox                                                default
	b3afe30e00aab       52546a367cc9e       13 seconds ago      Running             coredns                   0                   8bc4ff9d8b6f6       coredns-66bc5c9577-8tnd6                               kube-system
	8610ab673ef0a       6e38f40d628db       13 seconds ago      Running             storage-provisioner       0                   e82c75d2a4f19       storage-provisioner                                    kube-system
	744311138e698       409467f978b4a       24 seconds ago      Running             kindnet-cni               0                   13942c625a07a       kindnet-ddmgw                                          kube-system
	e82207bf97f93       fc25172553d79       24 seconds ago      Running             kube-proxy                0                   45243c52824ca       kube-proxy-lk5qw                                       kube-system
	a2a6000099a95       7dd6aaa1717ab       36 seconds ago      Running             kube-scheduler            0                   f3ee1be370bf4       kube-scheduler-default-k8s-diff-port-543625            kube-system
	9781005051618       c80c8dbafe7dd       36 seconds ago      Running             kube-controller-manager   0                   19ab0b6718d4d       kube-controller-manager-default-k8s-diff-port-543625   kube-system
	f4a53d6b3d755       5f1f5298c888d       36 seconds ago      Running             etcd                      0                   b34ed4f5bb328       etcd-default-k8s-diff-port-543625                      kube-system
	4ee73feddb3ba       c3994bc696102       36 seconds ago      Running             kube-apiserver            0                   25288b9b95008       kube-apiserver-default-k8s-diff-port-543625            kube-system
	
	
	==> containerd <==
	Nov 19 02:35:49 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:49.750488233Z" level=info msg="Container 8610ab673ef0af7d8c2680d80e214be7be70891606420a0d565e1f64e7bacad2: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 02:35:49 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:49.756483944Z" level=info msg="CreateContainer within sandbox \"e82c75d2a4f19bc67cd719cff710510e5bbe0b04985ac60e3669a094d3a366f7\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"8610ab673ef0af7d8c2680d80e214be7be70891606420a0d565e1f64e7bacad2\""
	Nov 19 02:35:49 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:49.757156664Z" level=info msg="StartContainer for \"8610ab673ef0af7d8c2680d80e214be7be70891606420a0d565e1f64e7bacad2\""
	Nov 19 02:35:49 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:49.758116308Z" level=info msg="connecting to shim 8610ab673ef0af7d8c2680d80e214be7be70891606420a0d565e1f64e7bacad2" address="unix:///run/containerd/s/29404008e72b4c691b4fcb32aa12f6d78ecd3e0226338ddc737bc81c010cf16c" protocol=ttrpc version=3
	Nov 19 02:35:49 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:49.758155146Z" level=info msg="CreateContainer within sandbox \"8bc4ff9d8b6f68e70f335d4de7db1454e887420855b73d68414a79ede27d8bb2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b3afe30e00aab3535a8058dfb8669ebfe41c8ba31866027f1b5648fe117eb425\""
	Nov 19 02:35:49 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:49.758676150Z" level=info msg="StartContainer for \"b3afe30e00aab3535a8058dfb8669ebfe41c8ba31866027f1b5648fe117eb425\""
	Nov 19 02:35:49 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:49.759581042Z" level=info msg="connecting to shim b3afe30e00aab3535a8058dfb8669ebfe41c8ba31866027f1b5648fe117eb425" address="unix:///run/containerd/s/6d0a92216c20298c951bdae7c2e61b6e7e0c7ec19c0fc6b8cd4cfc24eff6e7ae" protocol=ttrpc version=3
	Nov 19 02:35:49 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:49.814533499Z" level=info msg="StartContainer for \"8610ab673ef0af7d8c2680d80e214be7be70891606420a0d565e1f64e7bacad2\" returns successfully"
	Nov 19 02:35:49 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:49.818780462Z" level=info msg="StartContainer for \"b3afe30e00aab3535a8058dfb8669ebfe41c8ba31866027f1b5648fe117eb425\" returns successfully"
	Nov 19 02:35:52 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:52.289127194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:2c7a1e56-5397-4855-a23a-6fee9e7c0a32,Namespace:default,Attempt:0,}"
	Nov 19 02:35:52 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:52.333972633Z" level=info msg="connecting to shim 2789d7cc28880c13680d634a036dd2d7cf21653c9760f13f29b8796450a228f0" address="unix:///run/containerd/s/438a7fa212400e014d875c82c7cc5629c1fd1d192e60a69ea4c2318c6798f4e8" namespace=k8s.io protocol=ttrpc version=3
	Nov 19 02:35:52 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:52.410189629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:2c7a1e56-5397-4855-a23a-6fee9e7c0a32,Namespace:default,Attempt:0,} returns sandbox id \"2789d7cc28880c13680d634a036dd2d7cf21653c9760f13f29b8796450a228f0\""
	Nov 19 02:35:52 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:52.412352818Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 02:35:54 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:54.491711780Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 02:35:54 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:54.492636123Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396645"
	Nov 19 02:35:54 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:54.494022411Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 02:35:54 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:54.497158896Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 02:35:54 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:54.497769517Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.085306473s"
	Nov 19 02:35:54 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:54.497818397Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 19 02:35:54 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:54.501981223Z" level=info msg="CreateContainer within sandbox \"2789d7cc28880c13680d634a036dd2d7cf21653c9760f13f29b8796450a228f0\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 19 02:35:54 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:54.509954827Z" level=info msg="Container 96973ef75ad93373dc6e9c31279849e5ac18ee85c927802d51f3eedf214c3a25: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 02:35:54 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:54.516859875Z" level=info msg="CreateContainer within sandbox \"2789d7cc28880c13680d634a036dd2d7cf21653c9760f13f29b8796450a228f0\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"96973ef75ad93373dc6e9c31279849e5ac18ee85c927802d51f3eedf214c3a25\""
	Nov 19 02:35:54 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:54.517695897Z" level=info msg="StartContainer for \"96973ef75ad93373dc6e9c31279849e5ac18ee85c927802d51f3eedf214c3a25\""
	Nov 19 02:35:54 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:54.518645822Z" level=info msg="connecting to shim 96973ef75ad93373dc6e9c31279849e5ac18ee85c927802d51f3eedf214c3a25" address="unix:///run/containerd/s/438a7fa212400e014d875c82c7cc5629c1fd1d192e60a69ea4c2318c6798f4e8" protocol=ttrpc version=3
	Nov 19 02:35:54 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:54.582309904Z" level=info msg="StartContainer for \"96973ef75ad93373dc6e9c31279849e5ac18ee85c927802d51f3eedf214c3a25\" returns successfully"
	
	
	==> coredns [b3afe30e00aab3535a8058dfb8669ebfe41c8ba31866027f1b5648fe117eb425] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38191 - 865 "HINFO IN 6651518994578924638.4371494553589656537. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.017270142s
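	Editor's note: the HINFO query for a random name above is CoreDNS's standard startup self-check (the NXDOMAIN answer is expected). A hypothetical in-cluster probe of the same resolver, assuming the kube-dns ClusterIP 10.96.0.10 that the kube-apiserver log below shows being allocated:
	
	package main
	
	import (
		"context"
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// Route all lookups to the kube-dns ClusterIP from this run (10.96.0.10).
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				return d.DialContext(ctx, "udp", "10.96.0.10:53")
			},
		}
		addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
		fmt.Println(addrs, err) // expect the service CIDR address 10.96.0.1
	}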
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-543625
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-543625
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=default-k8s-diff-port-543625
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T02_35_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 02:35:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-543625
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 02:36:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 02:36:02 +0000   Wed, 19 Nov 2025 02:35:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 02:36:02 +0000   Wed, 19 Nov 2025 02:35:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 02:36:02 +0000   Wed, 19 Nov 2025 02:35:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 02:36:02 +0000   Wed, 19 Nov 2025 02:35:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-543625
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                774ecfee-a138-4988-8e59-3e7123e6ca41
	  Boot ID:                    fea1659d-b751-4f87-a281-819adf52de2d
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-8tnd6                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-default-k8s-diff-port-543625                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-ddmgw                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-default-k8s-diff-port-543625             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-543625    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-lk5qw                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-default-k8s-diff-port-543625             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 32s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  32s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  31s   kubelet          Node default-k8s-diff-port-543625 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s   kubelet          Node default-k8s-diff-port-543625 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s   kubelet          Node default-k8s-diff-port-543625 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s   node-controller  Node default-k8s-diff-port-543625 event: Registered Node default-k8s-diff-port-543625 in Controller
	  Normal  NodeReady                14s   kubelet          Node default-k8s-diff-port-543625 status is now: NodeReady
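	Editor's note: the Conditions table above (pressure conditions False, Ready True) is what the node_conditions.go NodePressure verification earlier in this log reads. A minimal client-go sketch performing the same kind of check, assuming a standard kubeconfig; the flagging logic is an illustration, not minikube's exact implementation:
	
	package main
	
	import (
		"context"
		"fmt"
	
		v1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			for _, c := range n.Status.Conditions {
				// Ready should be True; the pressure conditions should be False,
				// matching the Conditions table in the describe output above.
				bad := (c.Type == v1.NodeReady && c.Status != v1.ConditionTrue) ||
					(c.Type != v1.NodeReady && c.Status == v1.ConditionTrue)
				fmt.Printf("%s %s=%s bad=%v\n", n.Name, c.Type, c.Status, bad)
			}
		}
	}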
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 cb 73 dc 0d 8a 08 06
	[Nov19 02:31] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 74 0c d7 a6 53 08 06
	[  +0.000339] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 cb 73 dc 0d 8a 08 06
	[ +28.680399] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 e9 7c 92 36 13 08 06
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff fa 7c 46 16 38 40 08 06
	[Nov19 02:32] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1a f6 63 65 24 bc 08 06
	[  +4.552839] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 88 80 5a bc 94 08 06
	[ +11.086189] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 76 d1 26 7f 3d 08 06
	[  +0.000377] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa 7c 46 16 38 40 08 06
	[  +9.270754] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a2 49 fd 34 51 3b 08 06
	[  +0.000702] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 88 80 5a bc 94 08 06
	[ +23.593864] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca 86 43 5f 18 4c 08 06
	[  +0.000495] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a f6 63 65 24 bc 08 06
	
	
	==> etcd [f4a53d6b3d755cc7fc34555d06b48158e4c17a007f93bbb34db6c81a5ec471cb] <==
	{"level":"warn","ts":"2025-11-19T02:35:28.457232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.465859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.475413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.482903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.490981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.499504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.506815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.515844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.524949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.533845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.543284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.548968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.556563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.564097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.570592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.585675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.593507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.601815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.609556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.618964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.627757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.648833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.656953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.667240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.733182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45386","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 02:36:03 up  1:18,  0 user,  load average: 3.39, 3.73, 2.69
	Linux default-k8s-diff-port-543625 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [744311138e698615688618d52b4d6fdfb5cf9572c783692108167708987fd1ee] <==
	I1119 02:35:39.051359       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 02:35:39.051720       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1119 02:35:39.051907       1 main.go:148] setting mtu 1500 for CNI 
	I1119 02:35:39.051931       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 02:35:39.051946       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T02:35:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 02:35:39.252119       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 02:35:39.252149       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 02:35:39.252160       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 02:35:39.252323       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 02:35:39.553023       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 02:35:39.553064       1 metrics.go:72] Registering metrics
	I1119 02:35:39.553155       1 controller.go:711] "Syncing nftables rules"
	I1119 02:35:49.254332       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 02:35:49.254467       1 main.go:301] handling current node
	I1119 02:35:59.254528       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 02:35:59.254599       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4ee73feddb3bacb34afffcfab4c3faff9115fd2236ca0b7d4d4cb1c8e2971c8e] <==
	I1119 02:35:29.432205       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:35:29.432499       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1119 02:35:29.437938       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 02:35:29.438273       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1119 02:35:29.440044       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:35:29.528058       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 02:35:30.228208       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 02:35:30.232246       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 02:35:30.232262       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 02:35:30.757018       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 02:35:30.798037       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 02:35:30.933786       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 02:35:30.940628       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1119 02:35:30.942030       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 02:35:30.949814       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 02:35:31.775102       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 02:35:32.092896       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 02:35:32.103901       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 02:35:32.112321       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 02:35:37.425665       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 02:35:37.628227       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:35:37.633788       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:35:37.875334       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1119 02:36:02.095386       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8444->192.168.103.1:57554: use of closed network connection
	
	
	==> kube-controller-manager [97810050516180977cfc43fcfdc3911ff7e97009b1bb289b6963368736c25bf9] <==
	I1119 02:35:36.736837       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1119 02:35:36.760580       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1119 02:35:36.772522       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 02:35:36.772535       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1119 02:35:36.772685       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 02:35:36.772723       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 02:35:36.772730       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 02:35:36.772741       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1119 02:35:36.772807       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-543625"
	I1119 02:35:36.772854       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 02:35:36.772845       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1119 02:35:36.772890       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1119 02:35:36.773052       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1119 02:35:36.773117       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1119 02:35:36.773117       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1119 02:35:36.773286       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 02:35:36.773588       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1119 02:35:36.773672       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 02:35:36.773683       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1119 02:35:36.775149       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 02:35:36.777907       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 02:35:36.779093       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1119 02:35:36.787413       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1119 02:35:36.795039       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 02:35:51.774722       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [e82207bf97f9335bd740e2888ec7d6935a54cf62642cb055afc9c70e92001408] <==
	I1119 02:35:38.539596       1 server_linux.go:53] "Using iptables proxy"
	I1119 02:35:38.611890       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 02:35:38.712100       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 02:35:38.712140       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1119 02:35:38.712260       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 02:35:38.734413       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 02:35:38.734480       1 server_linux.go:132] "Using iptables Proxier"
	I1119 02:35:38.739902       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 02:35:38.740510       1 server.go:527] "Version info" version="v1.34.1"
	I1119 02:35:38.740548       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:35:38.741881       1 config.go:200] "Starting service config controller"
	I1119 02:35:38.741913       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 02:35:38.741923       1 config.go:106] "Starting endpoint slice config controller"
	I1119 02:35:38.741957       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 02:35:38.741980       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 02:35:38.741986       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 02:35:38.741980       1 config.go:309] "Starting node config controller"
	I1119 02:35:38.742018       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 02:35:38.842110       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 02:35:38.842139       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 02:35:38.842180       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 02:35:38.842330       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [a2a6000099a9578a791b24a5731fecae1b80316d70c109a330f7a9ba40a353a0] <==
	E1119 02:35:29.302412       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 02:35:29.302676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 02:35:29.302761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 02:35:29.302830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 02:35:29.302880       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 02:35:29.302927       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 02:35:29.302974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 02:35:29.303031       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 02:35:29.304429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 02:35:30.103055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 02:35:30.126442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 02:35:30.155125       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 02:35:30.191624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 02:35:30.224864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 02:35:30.224864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 02:35:30.235444       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 02:35:30.275683       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 02:35:30.317567       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 02:35:30.362163       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 02:35:30.362284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1119 02:35:30.391592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 02:35:30.431160       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 02:35:30.469591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 02:35:30.553665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1119 02:35:32.688045       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 02:35:33 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:33.020736    1455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-543625" podStartSLOduration=1.020712891 podStartE2EDuration="1.020712891s" podCreationTimestamp="2025-11-19 02:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:35:33.020411814 +0000 UTC m=+1.157324267" watchObservedRunningTime="2025-11-19 02:35:33.020712891 +0000 UTC m=+1.157625324"
	Nov 19 02:35:33 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:33.020918    1455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-543625" podStartSLOduration=1.020910926 podStartE2EDuration="1.020910926s" podCreationTimestamp="2025-11-19 02:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:35:33.009091049 +0000 UTC m=+1.146003500" watchObservedRunningTime="2025-11-19 02:35:33.020910926 +0000 UTC m=+1.157823377"
	Nov 19 02:35:33 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:33.030229    1455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-543625" podStartSLOduration=1.030206154 podStartE2EDuration="1.030206154s" podCreationTimestamp="2025-11-19 02:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:35:33.030172066 +0000 UTC m=+1.167084519" watchObservedRunningTime="2025-11-19 02:35:33.030206154 +0000 UTC m=+1.167118608"
	Nov 19 02:35:33 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:33.050789    1455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-543625" podStartSLOduration=1.050765854 podStartE2EDuration="1.050765854s" podCreationTimestamp="2025-11-19 02:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:35:33.041204013 +0000 UTC m=+1.178116466" watchObservedRunningTime="2025-11-19 02:35:33.050765854 +0000 UTC m=+1.187678306"
	Nov 19 02:35:36 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:36.814739    1455 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 19 02:35:36 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:36.815447    1455 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 19 02:35:37 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:37.977143    1455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/88b25c10-b469-410d-8418-e0ceaa17a8ea-lib-modules\") pod \"kindnet-ddmgw\" (UID: \"88b25c10-b469-410d-8418-e0ceaa17a8ea\") " pod="kube-system/kindnet-ddmgw"
	Nov 19 02:35:37 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:37.977203    1455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5tpn\" (UniqueName: \"kubernetes.io/projected/88b25c10-b469-410d-8418-e0ceaa17a8ea-kube-api-access-w5tpn\") pod \"kindnet-ddmgw\" (UID: \"88b25c10-b469-410d-8418-e0ceaa17a8ea\") " pod="kube-system/kindnet-ddmgw"
	Nov 19 02:35:37 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:37.977228    1455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/88b25c10-b469-410d-8418-e0ceaa17a8ea-cni-cfg\") pod \"kindnet-ddmgw\" (UID: \"88b25c10-b469-410d-8418-e0ceaa17a8ea\") " pod="kube-system/kindnet-ddmgw"
	Nov 19 02:35:37 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:37.977259    1455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/36f0dd8e-7095-4d43-b4f3-4a4b11b6f852-kube-proxy\") pod \"kube-proxy-lk5qw\" (UID: \"36f0dd8e-7095-4d43-b4f3-4a4b11b6f852\") " pod="kube-system/kube-proxy-lk5qw"
	Nov 19 02:35:37 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:37.977325    1455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36f0dd8e-7095-4d43-b4f3-4a4b11b6f852-xtables-lock\") pod \"kube-proxy-lk5qw\" (UID: \"36f0dd8e-7095-4d43-b4f3-4a4b11b6f852\") " pod="kube-system/kube-proxy-lk5qw"
	Nov 19 02:35:37 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:37.977397    1455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36f0dd8e-7095-4d43-b4f3-4a4b11b6f852-lib-modules\") pod \"kube-proxy-lk5qw\" (UID: \"36f0dd8e-7095-4d43-b4f3-4a4b11b6f852\") " pod="kube-system/kube-proxy-lk5qw"
	Nov 19 02:35:37 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:37.977435    1455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/88b25c10-b469-410d-8418-e0ceaa17a8ea-xtables-lock\") pod \"kindnet-ddmgw\" (UID: \"88b25c10-b469-410d-8418-e0ceaa17a8ea\") " pod="kube-system/kindnet-ddmgw"
	Nov 19 02:35:37 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:37.977463    1455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfj9v\" (UniqueName: \"kubernetes.io/projected/36f0dd8e-7095-4d43-b4f3-4a4b11b6f852-kube-api-access-rfj9v\") pod \"kube-proxy-lk5qw\" (UID: \"36f0dd8e-7095-4d43-b4f3-4a4b11b6f852\") " pod="kube-system/kube-proxy-lk5qw"
	Nov 19 02:35:39 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:39.007210    1455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-ddmgw" podStartSLOduration=2.007188518 podStartE2EDuration="2.007188518s" podCreationTimestamp="2025-11-19 02:35:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:35:39.006431519 +0000 UTC m=+7.143343972" watchObservedRunningTime="2025-11-19 02:35:39.007188518 +0000 UTC m=+7.144100949"
	Nov 19 02:35:39 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:39.016661    1455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lk5qw" podStartSLOduration=2.016643307 podStartE2EDuration="2.016643307s" podCreationTimestamp="2025-11-19 02:35:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:35:39.016280341 +0000 UTC m=+7.153192796" watchObservedRunningTime="2025-11-19 02:35:39.016643307 +0000 UTC m=+7.153555755"
	Nov 19 02:35:49 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:49.307038    1455 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 19 02:35:49 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:49.361234    1455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb5vf\" (UniqueName: \"kubernetes.io/projected/b72767a4-d2fe-420c-9cd5-7877ae681fd5-kube-api-access-zb5vf\") pod \"storage-provisioner\" (UID: \"b72767a4-d2fe-420c-9cd5-7877ae681fd5\") " pod="kube-system/storage-provisioner"
	Nov 19 02:35:49 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:49.361310    1455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01ac50b7-4308-4544-8340-ae41c3dd2992-config-volume\") pod \"coredns-66bc5c9577-8tnd6\" (UID: \"01ac50b7-4308-4544-8340-ae41c3dd2992\") " pod="kube-system/coredns-66bc5c9577-8tnd6"
	Nov 19 02:35:49 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:49.361341    1455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwzhc\" (UniqueName: \"kubernetes.io/projected/01ac50b7-4308-4544-8340-ae41c3dd2992-kube-api-access-vwzhc\") pod \"coredns-66bc5c9577-8tnd6\" (UID: \"01ac50b7-4308-4544-8340-ae41c3dd2992\") " pod="kube-system/coredns-66bc5c9577-8tnd6"
	Nov 19 02:35:49 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:49.361434    1455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b72767a4-d2fe-420c-9cd5-7877ae681fd5-tmp\") pod \"storage-provisioner\" (UID: \"b72767a4-d2fe-420c-9cd5-7877ae681fd5\") " pod="kube-system/storage-provisioner"
	Nov 19 02:35:50 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:50.047162    1455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8tnd6" podStartSLOduration=13.047135205 podStartE2EDuration="13.047135205s" podCreationTimestamp="2025-11-19 02:35:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:35:50.035856263 +0000 UTC m=+18.172768715" watchObservedRunningTime="2025-11-19 02:35:50.047135205 +0000 UTC m=+18.184047702"
	Nov 19 02:35:51 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:51.968876    1455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.968850472 podStartE2EDuration="13.968850472s" podCreationTimestamp="2025-11-19 02:35:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:35:50.057869094 +0000 UTC m=+18.194781547" watchObservedRunningTime="2025-11-19 02:35:51.968850472 +0000 UTC m=+20.105762908"
	Nov 19 02:35:52 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:52.081882    1455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmtcx\" (UniqueName: \"kubernetes.io/projected/2c7a1e56-5397-4855-a23a-6fee9e7c0a32-kube-api-access-rmtcx\") pod \"busybox\" (UID: \"2c7a1e56-5397-4855-a23a-6fee9e7c0a32\") " pod="default/busybox"
	Nov 19 02:35:55 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:55.048595    1455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.9616222429999999 podStartE2EDuration="4.048577252s" podCreationTimestamp="2025-11-19 02:35:51 +0000 UTC" firstStartedPulling="2025-11-19 02:35:52.411824196 +0000 UTC m=+20.548736643" lastFinishedPulling="2025-11-19 02:35:54.498779207 +0000 UTC m=+22.635691652" observedRunningTime="2025-11-19 02:35:55.048406308 +0000 UTC m=+23.185318761" watchObservedRunningTime="2025-11-19 02:35:55.048577252 +0000 UTC m=+23.185489700"
	
	
	==> storage-provisioner [8610ab673ef0af7d8c2680d80e214be7be70891606420a0d565e1f64e7bacad2] <==
	I1119 02:35:49.823321       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 02:35:49.831350       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 02:35:49.831412       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 02:35:49.834145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:35:49.839499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 02:35:49.839634       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 02:35:49.839814       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-543625_07de42df-e823-4a34-99b0-901998666d9e!
	I1119 02:35:49.839811       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4392a95d-b5b8-4658-a497-3ce97f257fa7", APIVersion:"v1", ResourceVersion:"409", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-543625_07de42df-e823-4a34-99b0-901998666d9e became leader
	W1119 02:35:49.841940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:35:49.845445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 02:35:49.940353       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-543625_07de42df-e823-4a34-99b0-901998666d9e!
	W1119 02:35:51.848539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:35:51.856831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:35:53.861799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:35:53.867140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:35:55.870810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:35:55.876681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:35:57.879801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:35:57.884618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:35:59.888265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:35:59.893555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:36:01.896972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:36:01.901710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
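The repeated "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warnings in the storage-provisioner log above are a side effect of its leader election: the lock record lives in a core/v1 Endpoints object (kube-system/k8s.io-minikube-hostpath), so every acquire/renew round-trip (roughly every 2s here) trips the API server's deprecation warning. A minimal client-go sketch of the Lease-based lock that avoids this is below; it is illustrative only, not the provisioner's actual code, and the identity handling is an assumption:

	// Sketch: Lease-based leader election with client-go. A Lease lock keeps
	// the leader record in coordination.k8s.io/v1, so the API server no longer
	// emits the core/v1 Endpoints deprecation warning on every renew.
	package main

	import (
		"context"
		"log"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		lock := &resourcelock.LeaseLock{
			LeaseMeta: metav1.ObjectMeta{
				Name:      "k8s.io-minikube-hostpath", // lock name from the log above
				Namespace: "kube-system",
			},
			Client: client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{
				Identity: os.Getenv("HOSTNAME"), // assumed unique per replica
			},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second, // matches the ~2s warning cadence above
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("acquired lease; starting provisioner controller")
				},
				OnStoppedLeading: func() {
					log.Println("lost lease; shutting down")
				},
			},
		})
	}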
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-543625 -n default-k8s-diff-port-543625
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-543625 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-543625
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-543625:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fd06141a46e5e66ab1beae58a100efdf29e826e0d2730f65161dc84d760a8ffd",
	        "Created": "2025-11-19T02:35:14.844697606Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 338070,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T02:35:14.879624025Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/fd06141a46e5e66ab1beae58a100efdf29e826e0d2730f65161dc84d760a8ffd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fd06141a46e5e66ab1beae58a100efdf29e826e0d2730f65161dc84d760a8ffd/hostname",
	        "HostsPath": "/var/lib/docker/containers/fd06141a46e5e66ab1beae58a100efdf29e826e0d2730f65161dc84d760a8ffd/hosts",
	        "LogPath": "/var/lib/docker/containers/fd06141a46e5e66ab1beae58a100efdf29e826e0d2730f65161dc84d760a8ffd/fd06141a46e5e66ab1beae58a100efdf29e826e0d2730f65161dc84d760a8ffd-json.log",
	        "Name": "/default-k8s-diff-port-543625",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-543625:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-543625",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fd06141a46e5e66ab1beae58a100efdf29e826e0d2730f65161dc84d760a8ffd",
	                "LowerDir": "/var/lib/docker/overlay2/d8992b7ca8b4c201f666f935519d4aa8313a6165602d1554fa18ba5ed871d2b9-init/diff:/var/lib/docker/overlay2/de7938e6a920c133c8c6b988444cfbf6706fdc6982445229ca70e2488a725edb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d8992b7ca8b4c201f666f935519d4aa8313a6165602d1554fa18ba5ed871d2b9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d8992b7ca8b4c201f666f935519d4aa8313a6165602d1554fa18ba5ed871d2b9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d8992b7ca8b4c201f666f935519d4aa8313a6165602d1554fa18ba5ed871d2b9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-543625",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-543625/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-543625",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-543625",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-543625",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "fc44185c184b24d2cd8804a0e72b24c36ea5710b5b04bb7958b94cdfd71e04c7",
	            "SandboxKey": "/var/run/docker/netns/fc44185c184b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-543625": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4119a42ae4484eab7e0723b03a220f641087de3f84e75c47464b02779c15ff0f",
	                    "EndpointID": "339755520b9118d219dbffcae52ea998401518b0b7ff319fe9bbc3f084e309e5",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "ce:15:97:7f:d5:67",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-543625",
	                        "fd06141a46e5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
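Single fields of the inspect dump above can be extracted with a Go template instead of parsing the full JSON; the cli_runner entries later in this log use exactly this pattern to find the host port mapped to the container's 22/tcp. A small stand-alone sketch (the profile name and the 8444/tcp key come from the dump above; the program itself is illustrative, not part of the test harness):

	// Sketch: pull one field out of `docker inspect` via a Go template,
	// the same technique the cli_runner lines below use for the SSH port.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Index into .NetworkSettings.Ports by protocol key, take the first
		// binding, and print its HostPort.
		format := `{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect",
			"-f", format, "default-k8s-diff-port-543625").Output()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(strings.TrimSpace(string(out)))
	}

Run against the container inspected above, this would print 33128, the 127.0.0.1 port bound to the apiserver's 8444/tcp.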
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-543625 -n default-k8s-diff-port-543625
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-543625 logs -n 25
E1119 02:36:04.513117   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/custom-flannel-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-543625 logs -n 25: (1.016946885s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-691094                                                                                                                                                                                                                           │ old-k8s-version-691094       │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ delete  │ -p old-k8s-version-691094                                                                                                                                                                                                                           │ old-k8s-version-691094       │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ delete  │ -p disable-driver-mounts-433931                                                                                                                                                                                                                     │ disable-driver-mounts-433931 │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ start   │ -p default-k8s-diff-port-543625 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-543625 │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ image   │ no-preload-483142 image list --format=json                                                                                                                                                                                                          │ no-preload-483142            │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ pause   │ -p no-preload-483142 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-483142            │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ delete  │ -p kubernetes-upgrade-896338                                                                                                                                                                                                                        │ kubernetes-upgrade-896338    │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ unpause │ -p no-preload-483142 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-483142            │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ delete  │ -p no-preload-483142                                                                                                                                                                                                                                │ no-preload-483142            │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ start   │ -p newest-cni-239505 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-239505            │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ delete  │ -p no-preload-483142                                                                                                                                                                                                                                │ no-preload-483142            │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ image   │ embed-certs-168452 image list --format=json                                                                                                                                                                                                         │ embed-certs-168452           │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ pause   │ -p embed-certs-168452 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-168452           │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ unpause │ -p embed-certs-168452 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-168452           │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ delete  │ -p embed-certs-168452                                                                                                                                                                                                                               │ embed-certs-168452           │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ delete  │ -p embed-certs-168452                                                                                                                                                                                                                               │ embed-certs-168452           │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ addons  │ enable metrics-server -p newest-cni-239505 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-239505            │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ stop    │ -p newest-cni-239505 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-239505            │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ addons  │ enable dashboard -p newest-cni-239505 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-239505            │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ start   │ -p newest-cni-239505 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-239505            │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ image   │ newest-cni-239505 image list --format=json                                                                                                                                                                                                          │ newest-cni-239505            │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ pause   │ -p newest-cni-239505 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-239505            │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ unpause │ -p newest-cni-239505 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-239505            │ jenkins │ v1.37.0 │ 19 Nov 25 02:35 UTC │ 19 Nov 25 02:35 UTC │
	│ delete  │ -p newest-cni-239505                                                                                                                                                                                                                                │ newest-cni-239505            │ jenkins │ v1.37.0 │ 19 Nov 25 02:36 UTC │ 19 Nov 25 02:36 UTC │
	│ delete  │ -p newest-cni-239505                                                                                                                                                                                                                                │ newest-cni-239505            │ jenkins │ v1.37.0 │ 19 Nov 25 02:36 UTC │ 19 Nov 25 02:36 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 02:35:47
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 02:35:47.400837  347381 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:35:47.400970  347381 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:35:47.400984  347381 out.go:374] Setting ErrFile to fd 2...
	I1119 02:35:47.400989  347381 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:35:47.401216  347381 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11107/.minikube/bin
	I1119 02:35:47.401739  347381 out.go:368] Setting JSON to false
	I1119 02:35:47.402804  347381 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4687,"bootTime":1763515060,"procs":246,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 02:35:47.402900  347381 start.go:143] virtualization: kvm guest
	I1119 02:35:47.404818  347381 out.go:179] * [newest-cni-239505] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 02:35:47.405933  347381 notify.go:221] Checking for updates...
	I1119 02:35:47.405966  347381 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:35:47.407382  347381 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:35:47.408865  347381 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-11107/kubeconfig
	I1119 02:35:47.410079  347381 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11107/.minikube
	I1119 02:35:47.411263  347381 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 02:35:47.412491  347381 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:35:47.414125  347381 config.go:182] Loaded profile config "newest-cni-239505": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:35:47.414597  347381 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:35:47.439704  347381 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 02:35:47.439787  347381 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:35:47.497896  347381 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:55 SystemTime:2025-11-19 02:35:47.48750667 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:35:47.498059  347381 docker.go:319] overlay module found
	I1119 02:35:47.499650  347381 out.go:179] * Using the docker driver based on existing profile
	I1119 02:35:47.500795  347381 start.go:309] selected driver: docker
	I1119 02:35:47.500811  347381 start.go:930] validating driver "docker" against &{Name:newest-cni-239505 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-239505 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:35:47.500922  347381 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:35:47.501645  347381 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:35:47.559773  347381 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:55 SystemTime:2025-11-19 02:35:47.550467875 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:35:47.560039  347381 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 02:35:47.560072  347381 cni.go:84] Creating CNI manager for ""
	I1119 02:35:47.560118  347381 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 02:35:47.560156  347381 start.go:353] cluster config:
	{Name:newest-cni-239505 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-239505 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:35:47.562111  347381 out.go:179] * Starting "newest-cni-239505" primary control-plane node in "newest-cni-239505" cluster
	I1119 02:35:47.563255  347381 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1119 02:35:47.564636  347381 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 02:35:47.565884  347381 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 02:35:47.565918  347381 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1119 02:35:47.565925  347381 cache.go:65] Caching tarball of preloaded images
	I1119 02:35:47.565998  347381 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 02:35:47.566042  347381 preload.go:238] Found /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1119 02:35:47.566053  347381 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1119 02:35:47.566186  347381 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/newest-cni-239505/config.json ...
	I1119 02:35:47.587344  347381 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 02:35:47.587383  347381 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 02:35:47.587402  347381 cache.go:243] Successfully downloaded all kic artifacts
	I1119 02:35:47.587428  347381 start.go:360] acquireMachinesLock for newest-cni-239505: {Name:mke53f1011bae5762647d8cf2de4903cc4de19ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:35:47.587481  347381 start.go:364] duration metric: took 35.624µs to acquireMachinesLock for "newest-cni-239505"
	I1119 02:35:47.587498  347381 start.go:96] Skipping create...Using existing machine configuration
	I1119 02:35:47.587503  347381 fix.go:54] fixHost starting: 
	I1119 02:35:47.587701  347381 cli_runner.go:164] Run: docker container inspect newest-cni-239505 --format={{.State.Status}}
	I1119 02:35:47.605970  347381 fix.go:112] recreateIfNeeded on newest-cni-239505: state=Stopped err=<nil>
	W1119 02:35:47.605998  347381 fix.go:138] unexpected machine state, will restart: <nil>
	W1119 02:35:44.919531  337314 node_ready.go:57] node "default-k8s-diff-port-543625" has "Ready":"False" status (will retry)
	W1119 02:35:46.919687  337314 node_ready.go:57] node "default-k8s-diff-port-543625" has "Ready":"False" status (will retry)
	W1119 02:35:48.919750  337314 node_ready.go:57] node "default-k8s-diff-port-543625" has "Ready":"False" status (will retry)
	I1119 02:35:49.420183  337314 node_ready.go:49] node "default-k8s-diff-port-543625" is "Ready"
	I1119 02:35:49.420212  337314 node_ready.go:38] duration metric: took 11.004235265s for node "default-k8s-diff-port-543625" to be "Ready" ...
	I1119 02:35:49.420228  337314 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:35:49.420271  337314 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:35:49.432830  337314 api_server.go:72] duration metric: took 11.337365597s to wait for apiserver process to appear ...
	I1119 02:35:49.432857  337314 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:35:49.432874  337314 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1119 02:35:49.438201  337314 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1119 02:35:49.439299  337314 api_server.go:141] control plane version: v1.34.1
	I1119 02:35:49.439331  337314 api_server.go:131] duration metric: took 6.466118ms to wait for apiserver health ...
	I1119 02:35:49.439342  337314 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 02:35:49.442756  337314 system_pods.go:59] 8 kube-system pods found
	I1119 02:35:49.442794  337314 system_pods.go:61] "coredns-66bc5c9577-8tnd6" [01ac50b7-4308-4544-8340-ae41c3dd2992] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:35:49.442801  337314 system_pods.go:61] "etcd-default-k8s-diff-port-543625" [be16351c-840a-4407-8dda-00b2d1adc11e] Running
	I1119 02:35:49.442807  337314 system_pods.go:61] "kindnet-ddmgw" [88b25c10-b469-410d-8418-e0ceaa17a8ea] Running
	I1119 02:35:49.442811  337314 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-543625" [2c449f2a-2ed8-42d3-a05b-6a3723cd057d] Running
	I1119 02:35:49.442815  337314 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-543625" [a2c42093-b10e-432b-b083-b487eb49e46c] Running
	I1119 02:35:49.442818  337314 system_pods.go:61] "kube-proxy-lk5qw" [36f0dd8e-7095-4d43-b4f3-4a4b11b6f852] Running
	I1119 02:35:49.442822  337314 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-543625" [7b32234f-d8c9-4e4d-bc07-e725a2b14c3e] Running
	I1119 02:35:49.442826  337314 system_pods.go:61] "storage-provisioner" [b72767a4-d2fe-420c-9cd5-7877ae681fd5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:35:49.442835  337314 system_pods.go:74] duration metric: took 3.486977ms to wait for pod list to return data ...
	I1119 02:35:49.442843  337314 default_sa.go:34] waiting for default service account to be created ...
	I1119 02:35:49.445443  337314 default_sa.go:45] found service account: "default"
	I1119 02:35:49.445474  337314 default_sa.go:55] duration metric: took 2.616799ms for default service account to be created ...
	I1119 02:35:49.445483  337314 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 02:35:49.448952  337314 system_pods.go:86] 8 kube-system pods found
	I1119 02:35:49.448989  337314 system_pods.go:89] "coredns-66bc5c9577-8tnd6" [01ac50b7-4308-4544-8340-ae41c3dd2992] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:35:49.448998  337314 system_pods.go:89] "etcd-default-k8s-diff-port-543625" [be16351c-840a-4407-8dda-00b2d1adc11e] Running
	I1119 02:35:49.449008  337314 system_pods.go:89] "kindnet-ddmgw" [88b25c10-b469-410d-8418-e0ceaa17a8ea] Running
	I1119 02:35:49.449015  337314 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-543625" [2c449f2a-2ed8-42d3-a05b-6a3723cd057d] Running
	I1119 02:35:49.449022  337314 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-543625" [a2c42093-b10e-432b-b083-b487eb49e46c] Running
	I1119 02:35:49.449032  337314 system_pods.go:89] "kube-proxy-lk5qw" [36f0dd8e-7095-4d43-b4f3-4a4b11b6f852] Running
	I1119 02:35:49.449037  337314 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-543625" [7b32234f-d8c9-4e4d-bc07-e725a2b14c3e] Running
	I1119 02:35:49.449044  337314 system_pods.go:89] "storage-provisioner" [b72767a4-d2fe-420c-9cd5-7877ae681fd5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:35:49.449075  337314 retry.go:31] will retry after 292.278482ms: missing components: kube-dns
	I1119 02:35:49.746679  337314 system_pods.go:86] 8 kube-system pods found
	I1119 02:35:49.746721  337314 system_pods.go:89] "coredns-66bc5c9577-8tnd6" [01ac50b7-4308-4544-8340-ae41c3dd2992] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:35:49.746730  337314 system_pods.go:89] "etcd-default-k8s-diff-port-543625" [be16351c-840a-4407-8dda-00b2d1adc11e] Running
	I1119 02:35:49.746739  337314 system_pods.go:89] "kindnet-ddmgw" [88b25c10-b469-410d-8418-e0ceaa17a8ea] Running
	I1119 02:35:49.746743  337314 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-543625" [2c449f2a-2ed8-42d3-a05b-6a3723cd057d] Running
	I1119 02:35:49.746746  337314 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-543625" [a2c42093-b10e-432b-b083-b487eb49e46c] Running
	I1119 02:35:49.746751  337314 system_pods.go:89] "kube-proxy-lk5qw" [36f0dd8e-7095-4d43-b4f3-4a4b11b6f852] Running
	I1119 02:35:49.746756  337314 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-543625" [7b32234f-d8c9-4e4d-bc07-e725a2b14c3e] Running
	I1119 02:35:49.746763  337314 system_pods.go:89] "storage-provisioner" [b72767a4-d2fe-420c-9cd5-7877ae681fd5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:35:49.746782  337314 retry.go:31] will retry after 378.870289ms: missing components: kube-dns
	I1119 02:35:50.130700  337314 system_pods.go:86] 8 kube-system pods found
	I1119 02:35:50.130736  337314 system_pods.go:89] "coredns-66bc5c9577-8tnd6" [01ac50b7-4308-4544-8340-ae41c3dd2992] Running
	I1119 02:35:50.130745  337314 system_pods.go:89] "etcd-default-k8s-diff-port-543625" [be16351c-840a-4407-8dda-00b2d1adc11e] Running
	I1119 02:35:50.130752  337314 system_pods.go:89] "kindnet-ddmgw" [88b25c10-b469-410d-8418-e0ceaa17a8ea] Running
	I1119 02:35:50.130758  337314 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-543625" [2c449f2a-2ed8-42d3-a05b-6a3723cd057d] Running
	I1119 02:35:50.130761  337314 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-543625" [a2c42093-b10e-432b-b083-b487eb49e46c] Running
	I1119 02:35:50.130765  337314 system_pods.go:89] "kube-proxy-lk5qw" [36f0dd8e-7095-4d43-b4f3-4a4b11b6f852] Running
	I1119 02:35:50.130778  337314 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-543625" [7b32234f-d8c9-4e4d-bc07-e725a2b14c3e] Running
	I1119 02:35:50.130784  337314 system_pods.go:89] "storage-provisioner" [b72767a4-d2fe-420c-9cd5-7877ae681fd5] Running
	I1119 02:35:50.130795  337314 system_pods.go:126] duration metric: took 685.305524ms to wait for k8s-apps to be running ...
	I1119 02:35:50.130809  337314 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 02:35:50.130857  337314 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:35:50.145231  337314 system_svc.go:56] duration metric: took 14.411439ms WaitForService to wait for kubelet
	I1119 02:35:50.145264  337314 kubeadm.go:587] duration metric: took 12.049805392s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:35:50.145284  337314 node_conditions.go:102] verifying NodePressure condition ...
	I1119 02:35:50.148998  337314 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 02:35:50.149034  337314 node_conditions.go:123] node cpu capacity is 8
	I1119 02:35:50.149050  337314 node_conditions.go:105] duration metric: took 3.75978ms to run NodePressure ...
	I1119 02:35:50.149064  337314 start.go:242] waiting for startup goroutines ...
	I1119 02:35:50.149074  337314 start.go:247] waiting for cluster config update ...
	I1119 02:35:50.149085  337314 start.go:256] writing updated cluster config ...
	I1119 02:35:50.149449  337314 ssh_runner.go:195] Run: rm -f paused
	I1119 02:35:50.153799  337314 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:35:50.157270  337314 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8tnd6" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:35:50.161954  337314 pod_ready.go:94] pod "coredns-66bc5c9577-8tnd6" is "Ready"
	I1119 02:35:50.161982  337314 pod_ready.go:86] duration metric: took 4.688734ms for pod "coredns-66bc5c9577-8tnd6" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:35:50.163888  337314 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-543625" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:35:50.167504  337314 pod_ready.go:94] pod "etcd-default-k8s-diff-port-543625" is "Ready"
	I1119 02:35:50.167523  337314 pod_ready.go:86] duration metric: took 3.613752ms for pod "etcd-default-k8s-diff-port-543625" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:35:50.169462  337314 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-543625" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:35:50.173228  337314 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-543625" is "Ready"
	I1119 02:35:50.173249  337314 pod_ready.go:86] duration metric: took 3.764954ms for pod "kube-apiserver-default-k8s-diff-port-543625" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:35:50.174980  337314 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-543625" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:35:50.557758  337314 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-543625" is "Ready"
	I1119 02:35:50.557782  337314 pod_ready.go:86] duration metric: took 382.784309ms for pod "kube-controller-manager-default-k8s-diff-port-543625" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:35:50.758083  337314 pod_ready.go:83] waiting for pod "kube-proxy-lk5qw" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:35:51.158326  337314 pod_ready.go:94] pod "kube-proxy-lk5qw" is "Ready"
	I1119 02:35:51.158358  337314 pod_ready.go:86] duration metric: took 400.25163ms for pod "kube-proxy-lk5qw" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:35:51.358261  337314 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-543625" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:35:51.757991  337314 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-543625" is "Ready"
	I1119 02:35:51.758016  337314 pod_ready.go:86] duration metric: took 399.729831ms for pod "kube-scheduler-default-k8s-diff-port-543625" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:35:51.758027  337314 pod_ready.go:40] duration metric: took 1.604193259s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:35:51.801860  337314 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 02:35:51.804130  337314 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-543625" cluster and "default" namespace by default
	I1119 02:35:47.607672  347381 out.go:252] * Restarting existing docker container for "newest-cni-239505" ...
	I1119 02:35:47.607739  347381 cli_runner.go:164] Run: docker start newest-cni-239505
	I1119 02:35:47.890163  347381 cli_runner.go:164] Run: docker container inspect newest-cni-239505 --format={{.State.Status}}
	I1119 02:35:47.909657  347381 kic.go:430] container "newest-cni-239505" state is running.
	I1119 02:35:47.910050  347381 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-239505
	I1119 02:35:47.930867  347381 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/newest-cni-239505/config.json ...
	I1119 02:35:47.931124  347381 machine.go:94] provisionDockerMachine start ...
	I1119 02:35:47.931201  347381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-239505
	I1119 02:35:47.950868  347381 main.go:143] libmachine: Using SSH client type: native
	I1119 02:35:47.951147  347381 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1119 02:35:47.951170  347381 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 02:35:47.951759  347381 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41488->127.0.0.1:33135: read: connection reset by peer
	I1119 02:35:51.087409  347381 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-239505
	
	I1119 02:35:51.087442  347381 ubuntu.go:182] provisioning hostname "newest-cni-239505"
	I1119 02:35:51.087503  347381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-239505
	I1119 02:35:51.106329  347381 main.go:143] libmachine: Using SSH client type: native
	I1119 02:35:51.106588  347381 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1119 02:35:51.106604  347381 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-239505 && echo "newest-cni-239505" | sudo tee /etc/hostname
	I1119 02:35:51.250055  347381 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-239505
	
	I1119 02:35:51.250142  347381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-239505
	I1119 02:35:51.269426  347381 main.go:143] libmachine: Using SSH client type: native
	I1119 02:35:51.269653  347381 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1119 02:35:51.269675  347381 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-239505' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-239505/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-239505' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 02:35:51.403348  347381 main.go:143] libmachine: SSH cmd err, output: <nil>: 
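provisionDockerMachine reaches the node over SSH on whatever host port Docker mapped to the container's 22/tcp, using the inspect template shown above. A sketch of the same lookup by hand; the "docker" login user and the id_rsa key come from the sshutil lines below, while the ~/.minikube location is an assumption outside this CI layout:

    # Resolve the host port mapped to the container's sshd, then run a command.
    NAME=newest-cni-239505
    PORT=$(docker container inspect -f \
      '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' "$NAME")
    ssh -o StrictHostKeyChecking=no -p "$PORT" \
      -i "$HOME/.minikube/machines/$NAME/id_rsa" docker@127.0.0.1 hostname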
	I1119 02:35:51.403404  347381 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-11107/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-11107/.minikube}
	I1119 02:35:51.403429  347381 ubuntu.go:190] setting up certificates
	I1119 02:35:51.403448  347381 provision.go:84] configureAuth start
	I1119 02:35:51.403493  347381 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-239505
	I1119 02:35:51.423151  347381 provision.go:143] copyHostCerts
	I1119 02:35:51.423228  347381 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11107/.minikube/ca.pem, removing ...
	I1119 02:35:51.423248  347381 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11107/.minikube/ca.pem
	I1119 02:35:51.423399  347381 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-11107/.minikube/ca.pem (1082 bytes)
	I1119 02:35:51.423546  347381 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11107/.minikube/cert.pem, removing ...
	I1119 02:35:51.423560  347381 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11107/.minikube/cert.pem
	I1119 02:35:51.423611  347381 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-11107/.minikube/cert.pem (1123 bytes)
	I1119 02:35:51.423708  347381 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11107/.minikube/key.pem, removing ...
	I1119 02:35:51.423719  347381 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11107/.minikube/key.pem
	I1119 02:35:51.423753  347381 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-11107/.minikube/key.pem (1675 bytes)
	I1119 02:35:51.423824  347381 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-11107/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca-key.pem org=jenkins.newest-cni-239505 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-239505]
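provision.go generates that server certificate in Go; an equivalent openssl sketch, with illustrative file names, signing a fresh key against the machine CA and embedding the same SAN list the log reports:

    # Issue a server cert carrying the SANs from the log, signed by the local CA.
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -subj "/O=jenkins.newest-cni-239505" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem -extfile <(printf \
      "subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:localhost,DNS:minikube,DNS:newest-cni-239505")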
	I1119 02:35:51.555080  347381 provision.go:177] copyRemoteCerts
	I1119 02:35:51.555155  347381 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 02:35:51.555200  347381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-239505
	I1119 02:35:51.574477  347381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/newest-cni-239505/id_rsa Username:docker}
	I1119 02:35:51.670101  347381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 02:35:51.688576  347381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 02:35:51.705998  347381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 02:35:51.723963  347381 provision.go:87] duration metric: took 320.502408ms to configureAuth
	I1119 02:35:51.723996  347381 ubuntu.go:206] setting minikube options for container-runtime
	I1119 02:35:51.724225  347381 config.go:182] Loaded profile config "newest-cni-239505": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:35:51.724241  347381 machine.go:97] duration metric: took 3.7931035s to provisionDockerMachine
	I1119 02:35:51.724251  347381 start.go:293] postStartSetup for "newest-cni-239505" (driver="docker")
	I1119 02:35:51.724263  347381 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 02:35:51.724318  347381 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 02:35:51.724401  347381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-239505
	I1119 02:35:51.742905  347381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/newest-cni-239505/id_rsa Username:docker}
	I1119 02:35:51.840030  347381 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 02:35:51.844169  347381 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 02:35:51.844207  347381 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 02:35:51.844221  347381 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11107/.minikube/addons for local assets ...
	I1119 02:35:51.844290  347381 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11107/.minikube/files for local assets ...
	I1119 02:35:51.844448  347381 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem -> 146572.pem in /etc/ssl/certs
	I1119 02:35:51.844571  347381 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 02:35:51.853823  347381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem --> /etc/ssl/certs/146572.pem (1708 bytes)
	I1119 02:35:51.873010  347381 start.go:296] duration metric: took 148.732465ms for postStartSetup
	I1119 02:35:51.873085  347381 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:35:51.873116  347381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-239505
	I1119 02:35:51.894305  347381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/newest-cni-239505/id_rsa Username:docker}
	I1119 02:35:51.988416  347381 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 02:35:51.993832  347381 fix.go:56] duration metric: took 4.406321541s for fixHost
	I1119 02:35:51.993857  347381 start.go:83] releasing machines lock for "newest-cni-239505", held for 4.406364595s
	I1119 02:35:51.993922  347381 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-239505
	I1119 02:35:52.012608  347381 ssh_runner.go:195] Run: cat /version.json
	I1119 02:35:52.012653  347381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-239505
	I1119 02:35:52.012698  347381 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 02:35:52.012764  347381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-239505
	I1119 02:35:52.033522  347381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/newest-cni-239505/id_rsa Username:docker}
	I1119 02:35:52.033946  347381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/newest-cni-239505/id_rsa Username:docker}
	I1119 02:35:52.181828  347381 ssh_runner.go:195] Run: systemctl --version
	I1119 02:35:52.188860  347381 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 02:35:52.193697  347381 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 02:35:52.193761  347381 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 02:35:52.201832  347381 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 02:35:52.201855  347381 start.go:496] detecting cgroup driver to use...
	I1119 02:35:52.201882  347381 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 02:35:52.201918  347381 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1119 02:35:52.218073  347381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1119 02:35:52.230964  347381 docker.go:218] disabling cri-docker service (if available) ...
	I1119 02:35:52.231008  347381 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 02:35:52.246109  347381 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 02:35:52.258818  347381 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 02:35:52.343216  347381 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 02:35:52.432854  347381 docker.go:234] disabling docker service ...
	I1119 02:35:52.432916  347381 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 02:35:52.448350  347381 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 02:35:52.461059  347381 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 02:35:52.538432  347381 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 02:35:52.617421  347381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
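The block above retires cri-docker and docker so containerd owns the CRI socket: socket units are stopped before services so socket activation cannot resurrect them, then the units are masked. A condensed sketch of the same sequence, run inside the node:

    # Stop socket units first, then services, then mask so nothing restarts them.
    for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
      sudo systemctl stop -f "$unit" 2>/dev/null || true
    done
    sudo systemctl disable cri-docker.socket docker.socket 2>/dev/null || true
    sudo systemctl mask cri-docker.service docker.service
    sudo systemctl is-active --quiet docker && echo "docker still active" >&2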
	I1119 02:35:52.629879  347381 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 02:35:52.644238  347381 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1119 02:35:52.653177  347381 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1119 02:35:52.662177  347381 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1119 02:35:52.662268  347381 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1119 02:35:52.671552  347381 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 02:35:52.680263  347381 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1119 02:35:52.689420  347381 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1119 02:35:52.698659  347381 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 02:35:52.707126  347381 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1119 02:35:52.716477  347381 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1119 02:35:52.725757  347381 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1119 02:35:52.734728  347381 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 02:35:52.742163  347381 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 02:35:52.749627  347381 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:35:52.828250  347381 ssh_runner.go:195] Run: sudo systemctl restart containerd
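Those sed edits rewrite /etc/containerd/config.toml in place before the restart. A sketch of the two that matter most here (cgroup driver and pause image), assuming the containerd 2.x config layout seen on this node:

    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml
    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd
    # Block until the CRI socket is back before touching crictl.
    until [ -S /run/containerd/containerd.sock ]; do sleep 0.5; done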
	I1119 02:35:52.932734  347381 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1119 02:35:52.932786  347381 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1119 02:35:52.936971  347381 start.go:564] Will wait 60s for crictl version
	I1119 02:35:52.937019  347381 ssh_runner.go:195] Run: which crictl
	I1119 02:35:52.940531  347381 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 02:35:52.965452  347381 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1119 02:35:52.965524  347381 ssh_runner.go:195] Run: containerd --version
	I1119 02:35:52.986594  347381 ssh_runner.go:195] Run: containerd --version
	I1119 02:35:53.009596  347381 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1119 02:35:53.011151  347381 cli_runner.go:164] Run: docker network inspect newest-cni-239505 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:35:53.030555  347381 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1119 02:35:53.034715  347381 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
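That one-liner is minikube's idempotent hosts-entry pattern: strip any stale mapping for the name, append the current one, and install the result with cp so readers holding the old /etc/hosts inode are not broken. Spelled out:

    NAME=host.minikube.internal IP=192.168.85.1
    # Drop old entries for $NAME (tab-delimited, anchored at end of line), add a fresh one.
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$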
	I1119 02:35:53.047172  347381 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1119 02:35:53.048457  347381 kubeadm.go:884] updating cluster {Name:newest-cni-239505 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-239505 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 02:35:53.048602  347381 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 02:35:53.048674  347381 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:35:53.074705  347381 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 02:35:53.074723  347381 containerd.go:534] Images already preloaded, skipping extraction
	I1119 02:35:53.074769  347381 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:35:53.100350  347381 containerd.go:627] all images are preloaded for containerd runtime.
	I1119 02:35:53.100396  347381 cache_images.go:86] Images are preloaded, skipping loading
	I1119 02:35:53.100406  347381 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1119 02:35:53.100497  347381 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-239505 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-239505 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 02:35:53.100552  347381 ssh_runner.go:195] Run: sudo crictl info
	I1119 02:35:53.127295  347381 cni.go:84] Creating CNI manager for ""
	I1119 02:35:53.127316  347381 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 02:35:53.127330  347381 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1119 02:35:53.127349  347381 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-239505 NodeName:newest-cni-239505 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:35:53.127508  347381 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-239505"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 02:35:53.127572  347381 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 02:35:53.135646  347381 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 02:35:53.135722  347381 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:35:53.143558  347381 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1119 02:35:53.156672  347381 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:35:53.169938  347381 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
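With the unit files and kubeadm.yaml.new staged, the generated config can be sanity-checked before kubelet starts. kubeadm gained a validate subcommand in recent releases (v1.26+); whether this staged binary path holds outside minikube's node layout is an assumption:

    # Validate the freshly written kubeadm config against the staged kubeadm binary.
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new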
	I1119 02:35:53.184132  347381 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:35:53.188213  347381 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:35:53.199678  347381 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:35:53.280671  347381 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:35:53.305099  347381 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/newest-cni-239505 for IP: 192.168.85.2
	I1119 02:35:53.305124  347381 certs.go:195] generating shared ca certs ...
	I1119 02:35:53.305145  347381 certs.go:227] acquiring lock for ca certs: {Name:mk11d6789b2333e17b3937495b501fbcca15c242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:35:53.305300  347381 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-11107/.minikube/ca.key
	I1119 02:35:53.305343  347381 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-11107/.minikube/proxy-client-ca.key
	I1119 02:35:53.305353  347381 certs.go:257] generating profile certs ...
	I1119 02:35:53.305468  347381 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/newest-cni-239505/client.key
	I1119 02:35:53.305518  347381 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/newest-cni-239505/apiserver.key.3b68aa73
	I1119 02:35:53.305553  347381 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/newest-cni-239505/proxy-client.key
	I1119 02:35:53.305671  347381 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/14657.pem (1338 bytes)
	W1119 02:35:53.305702  347381 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-11107/.minikube/certs/14657_empty.pem, impossibly tiny 0 bytes
	I1119 02:35:53.305712  347381 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 02:35:53.305732  347381 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem (1082 bytes)
	I1119 02:35:53.305756  347381 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:35:53.305778  347381 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/key.pem (1675 bytes)
	I1119 02:35:53.305817  347381 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem (1708 bytes)
	I1119 02:35:53.306462  347381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:35:53.326098  347381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 02:35:53.344940  347381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:35:53.363844  347381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:35:53.387719  347381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/newest-cni-239505/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 02:35:53.409627  347381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/newest-cni-239505/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 02:35:53.429288  347381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/newest-cni-239505/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:35:53.448417  347381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/newest-cni-239505/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 02:35:53.466531  347381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem --> /usr/share/ca-certificates/146572.pem (1708 bytes)
	I1119 02:35:53.484649  347381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:35:53.504556  347381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/certs/14657.pem --> /usr/share/ca-certificates/14657.pem (1338 bytes)
	I1119 02:35:53.524324  347381 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 02:35:53.538770  347381 ssh_runner.go:195] Run: openssl version
	I1119 02:35:53.545089  347381 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:35:53.554642  347381 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:35:53.558994  347381 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:57 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:35:53.559065  347381 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:35:53.595031  347381 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 02:35:53.604362  347381 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14657.pem && ln -fs /usr/share/ca-certificates/14657.pem /etc/ssl/certs/14657.pem"
	I1119 02:35:53.613125  347381 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14657.pem
	I1119 02:35:53.616990  347381 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:02 /usr/share/ca-certificates/14657.pem
	I1119 02:35:53.617052  347381 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14657.pem
	I1119 02:35:53.651629  347381 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14657.pem /etc/ssl/certs/51391683.0"
	I1119 02:35:53.660258  347381 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146572.pem && ln -fs /usr/share/ca-certificates/146572.pem /etc/ssl/certs/146572.pem"
	I1119 02:35:53.669226  347381 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146572.pem
	I1119 02:35:53.673309  347381 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:02 /usr/share/ca-certificates/146572.pem
	I1119 02:35:53.673406  347381 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146572.pem
	I1119 02:35:53.708041  347381 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/146572.pem /etc/ssl/certs/3ec20f2e.0"
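The hash-and-symlink sequence above is how OpenSSL trust stores are wired: certificates are located by subject hash, so each PEM under /usr/share/ca-certificates gets a "<hash>.0" symlink in /etc/ssl/certs. Sketch for one cert:

    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")   # prints e.g. b5213941, matching the log
    sudo ln -fs "$pem" "/etc/ssl/certs/$hash.0"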
	I1119 02:35:53.716703  347381 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:35:53.721178  347381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 02:35:53.757789  347381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 02:35:53.794527  347381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 02:35:53.837676  347381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 02:35:53.893972  347381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 02:35:53.950806  347381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
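Each "-checkend 86400" call exits non-zero if the certificate expires within 24 hours, which is what gates regeneration here. An equivalent loop over a few of the same files:

    for crt in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
      sudo openssl x509 -noout -checkend 86400 \
        -in "/var/lib/minikube/certs/$crt.crt" || echo "$crt expires within 24h"
    done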
	I1119 02:35:54.004163  347381 kubeadm.go:401] StartCluster: {Name:newest-cni-239505 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-239505 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:35:54.004290  347381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1119 02:35:54.004386  347381 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:35:54.048518  347381 cri.go:89] found id: "896e7ba9918c57df7870e60ee2adfd4ca68e4338b6a5c1f04c9a3cb7d1674332"
	I1119 02:35:54.048542  347381 cri.go:89] found id: "f1fae129650bdbd767c99285efd7d9eb755597dfbc088ab6ee9224ecf8ddf166"
	I1119 02:35:54.048548  347381 cri.go:89] found id: "44349b33052492aa2604afff47ff7a4ddcf5219f43a47923fa749dbb520c7b05"
	I1119 02:35:54.048552  347381 cri.go:89] found id: "394de3b4f48c01a85e5b574db877f5c3334a77a0463ddbe582de92b32e2833d7"
	I1119 02:35:54.048556  347381 cri.go:89] found id: "3666964818012148dfe702358f6c5b27f9578c563a1c811a27e8021a2c2ff2eb"
	I1119 02:35:54.048562  347381 cri.go:89] found id: "5793bf39ee28f256f3b97ac85ab00dd5f99741dd895d82536db1f9d3bc949d44"
	I1119 02:35:54.048565  347381 cri.go:89] found id: "df9e67fb6ca6844de2b82323fda742288ea474a4187c8e77dde384e7f2fe1aa6"
	I1119 02:35:54.048578  347381 cri.go:89] found id: "56b78f46816862ab6f488a86861ccb62a8b8fd4aede37d3b21ade91af763aa96"
	I1119 02:35:54.048582  347381 cri.go:89] found id: "9156f446b7b25b9d9d01460e58eda771b3f4a04a1c3d55a14391065cc55a1560"
	I1119 02:35:54.048627  347381 cri.go:89] found id: "5c587b93ffb1ce6e0fb1e971f6c70168e2813290fbb13003dea1688c7063686b"
	I1119 02:35:54.048635  347381 cri.go:89] found id: ""
	I1119 02:35:54.048685  347381 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1119 02:35:54.082340  347381 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"0a5f67efee16c45c483b238e0efa2ed85d84df9112a6f64b6784b2ab24654773","pid":847,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a5f67efee16c45c483b238e0efa2ed85d84df9112a6f64b6784b2ab24654773","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a5f67efee16c45c483b238e0efa2ed85d84df9112a6f64b6784b2ab24654773/rootfs","created":"2025-11-19T02:35:53.889699047Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"0a5f67efee16c45c483b238e0efa2ed85d84df9112a6f64b6784b2ab24654773","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-239505_0587d0fb67ac1bf68b023e756b989c11","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-newest-cni-239505","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"0587d0fb67ac1bf68b023e756b989c11"},"owner":"root"},{"ociVersion":"1.2.1","id":"394de3b4f48c01a85e5b574db877f5c3334a77a0463ddbe582de92b32e2833d7","pid":935,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/394de3b4f48c01a85e5b574db877f5c3334a77a0463ddbe582de92b32e2833d7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/394de3b4f48c01a85e5b574db877f5c3334a77a0463ddbe582de92b32e2833d7/rootfs","created":"2025-11-19T02:35:54.004488733Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.1","io.kubernetes.cri.sandbox-id":"3c1254266e465449e256adf1cd3f71dc93083029bcb62014a050338c640350bf","io.kubernetes.cri.sandbox-name":"kube-apiserver-newest-cni-239505","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ce56d7590bb14d4c47c79f7a8212f6f8"},"owner":"root"},{"ociVersion":"1.2.1","id":"3c1254266e465449e256adf1cd3f71dc93083029bcb62014a050338c640350bf","pid":834,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c1254266e465449e256adf1cd3f71dc93083029bcb62014a050338c640350bf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c1254266e465449e256adf1cd3f71dc93083029bcb62014a050338c640350bf/rootfs","created":"2025-11-19T02:35:53.88307603Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"3c1254266e465449e256adf1cd3f71dc93083029bcb62014a050338c640350bf","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-newest-cni-239505_ce56d7590bb14d4c47c79f7a8212f6f8","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-newest-cni-239505","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ce56d7590bb14d4c47c79f7a8212f6f8"},"owner":"root"},{"ociVersion":"1.2.1","id":"44349b33052492aa2604afff47ff7a4ddcf5219f43a47923fa749dbb520c7b05","pid":947,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/44349b33052492aa2604afff47ff7a4ddcf5219f43a47923fa749dbb520c7b05","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/44349b33052492aa2604afff47ff7a4ddcf5219f43a47923fa749dbb520c7b05/rootfs","created":"2025-11-19T02:35:54.016290745Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.1","io.kubernetes.cri.sandbox-id":"0a5f67efee16c45c483b238e0efa2ed85d84df9112a6f64b6784b2ab24654773","io.kubernetes.cri.sandbox-name":"kube-controller-manager-newest-cni-239505","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"0587d0fb67ac1bf68b023e756b989c11"},"owner":"root"},{"ociVersion":"1.2.1","id":"896e7ba9918c57df7870e60ee2adfd4ca68e4338b6a5c1f04c9a3cb7d1674332","pid":979,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/896e7ba9918c57df7870e60ee2adfd4ca68e4338b6a5c1f04c9a3cb7d1674332","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/896e7ba9918c57df7870e60ee2adfd4ca68e4338b6a5c1f04c9a3cb7d1674332/rootfs","created":"2025-11-19T02:35:54.023734678Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri.sandbox-id":"bfced899161a0337ae5adf67427b33a3b8f27c734b69eda80fd7b34a527982f3","io.kubernetes.cri.sandbox-name":"etcd-newest-cni-239505","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"bcf32e34d14ad59cd0ad1ca743424b20"},"owner":"root"},{"ociVersion":"1.2.1","id":"bfced899161a0337ae5adf67427b33a3b8f27c734b69eda80fd7b34a527982f3","pid":871,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bfced899161a0337ae5adf67427b33a3b8f27c734b69eda80fd7b34a527982f3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bfced899161a0337ae5adf67427b33a3b8f27c734b69eda80fd7b34a527982f3/rootfs","created":"2025-11-19T02:35:53.902079564Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"bfced899161a0337ae5adf67427b33a3b8f27c734b69eda80fd7b34a527982f3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-newest-cni-239505_bcf32e34d14ad59cd0ad1ca743424b20","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-newest-cni-239505","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"bcf32e34d14ad59cd0ad1ca743424b20"},"owner":"root"},{"ociVersion":"1.2.1","id":"d8a2b8d0e3fa887b4bde6ebcfa5d21cabbbcde892d351228350666759e15659a","pid":864,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d8a2b8d0e3fa887b4bde6ebcfa5d21cabbbcde892d351228350666759e15659a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d8a2b8d0e3fa887b4bde6ebcfa5d21cabbbcde892d351228350666759e15659a/rootfs","created":"2025-11-19T02:35:53.895686015Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"d8a2b8d0e3fa887b4bde6ebcfa5d21cabbbcde892d351228350666759e15659a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-newest-cni-239505_fbb8d14132830afcf406a097e2f2b384","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-newest-cni-239505","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"fbb8d14132830afcf406a097e2f2b384"},"owner":"root"},{"ociVersion":"1.2.1","id":"f1fae129650bdbd767c99285efd7d9eb755597dfbc088ab6ee9224ecf8ddf166","pid":972,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f1fae129650bdbd767c99285efd7d9eb755597dfbc088ab6ee9224ecf8ddf166","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f1fae129650bdbd767c99285efd7d9eb755597dfbc088ab6ee9224ecf8ddf166/rootfs","created":"2025-11-19T02:35:54.017922437Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.1","io.kubernetes.cri.sandbox-id":"d8a2b8d0e3fa887b4bde6ebcfa5d21cabbbcde892d351228350666759e15659a","io.kubernetes.cri.sandbox-name":"kube-scheduler-newest-cni-239505","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"fbb8d14132830afcf406a097e2f2b384"},"owner":"root"}]
	I1119 02:35:54.082633  347381 cri.go:126] list returned 8 containers
	I1119 02:35:54.082651  347381 cri.go:129] container: {ID:0a5f67efee16c45c483b238e0efa2ed85d84df9112a6f64b6784b2ab24654773 Status:running}
	I1119 02:35:54.082692  347381 cri.go:131] skipping 0a5f67efee16c45c483b238e0efa2ed85d84df9112a6f64b6784b2ab24654773 - not in ps
	I1119 02:35:54.082717  347381 cri.go:129] container: {ID:394de3b4f48c01a85e5b574db877f5c3334a77a0463ddbe582de92b32e2833d7 Status:running}
	I1119 02:35:54.082730  347381 cri.go:135] skipping {394de3b4f48c01a85e5b574db877f5c3334a77a0463ddbe582de92b32e2833d7 running}: state = "running", want "paused"
	I1119 02:35:54.082750  347381 cri.go:129] container: {ID:3c1254266e465449e256adf1cd3f71dc93083029bcb62014a050338c640350bf Status:running}
	I1119 02:35:54.082757  347381 cri.go:131] skipping 3c1254266e465449e256adf1cd3f71dc93083029bcb62014a050338c640350bf - not in ps
	I1119 02:35:54.082773  347381 cri.go:129] container: {ID:44349b33052492aa2604afff47ff7a4ddcf5219f43a47923fa749dbb520c7b05 Status:running}
	I1119 02:35:54.082798  347381 cri.go:135] skipping {44349b33052492aa2604afff47ff7a4ddcf5219f43a47923fa749dbb520c7b05 running}: state = "running", want "paused"
	I1119 02:35:54.082811  347381 cri.go:129] container: {ID:896e7ba9918c57df7870e60ee2adfd4ca68e4338b6a5c1f04c9a3cb7d1674332 Status:running}
	I1119 02:35:54.082819  347381 cri.go:135] skipping {896e7ba9918c57df7870e60ee2adfd4ca68e4338b6a5c1f04c9a3cb7d1674332 running}: state = "running", want "paused"
	I1119 02:35:54.082830  347381 cri.go:129] container: {ID:bfced899161a0337ae5adf67427b33a3b8f27c734b69eda80fd7b34a527982f3 Status:running}
	I1119 02:35:54.082838  347381 cri.go:131] skipping bfced899161a0337ae5adf67427b33a3b8f27c734b69eda80fd7b34a527982f3 - not in ps
	I1119 02:35:54.082847  347381 cri.go:129] container: {ID:d8a2b8d0e3fa887b4bde6ebcfa5d21cabbbcde892d351228350666759e15659a Status:running}
	I1119 02:35:54.082856  347381 cri.go:131] skipping d8a2b8d0e3fa887b4bde6ebcfa5d21cabbbcde892d351228350666759e15659a - not in ps
	I1119 02:35:54.082876  347381 cri.go:129] container: {ID:f1fae129650bdbd767c99285efd7d9eb755597dfbc088ab6ee9224ecf8ddf166 Status:running}
	I1119 02:35:54.082885  347381 cri.go:135] skipping {f1fae129650bdbd767c99285efd7d9eb755597dfbc088ab6ee9224ecf8ddf166 running}: state = "running", want "paused"
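The triage above cross-references CRI containers with runc's low-level task list: sandbox IDs absent from the crictl listing are skipped as "not in ps", and running tasks are skipped because this code path only resumes paused ones. The two underlying queries, as they appear in the log:

    # CRI view: all kube-system container IDs, then the raw runc task state.
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    sudo runc --root /run/containerd/runc/k8s.io list -f json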
	I1119 02:35:54.083183  347381 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:35:54.094559  347381 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 02:35:54.094587  347381 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 02:35:54.094641  347381 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 02:35:54.105647  347381 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 02:35:54.106382  347381 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-239505" does not appear in /home/jenkins/minikube-integration/21924-11107/kubeconfig
	I1119 02:35:54.106738  347381 kubeconfig.go:62] /home/jenkins/minikube-integration/21924-11107/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-239505" cluster setting kubeconfig missing "newest-cni-239505" context setting]
	I1119 02:35:54.107347  347381 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/kubeconfig: {Name:mka1017ae43351e828a5b28d36aa5ae57c3e40e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:35:54.109335  347381 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 02:35:54.120053  347381 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1119 02:35:54.120094  347381 kubeadm.go:602] duration metric: took 25.499769ms to restartPrimaryControlPlane
	I1119 02:35:54.120107  347381 kubeadm.go:403] duration metric: took 115.955182ms to StartCluster
	I1119 02:35:54.120125  347381 settings.go:142] acquiring lock: {Name:mka23b77a5b0acf39a3c823ca2c708cd59c6f5ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:35:54.120200  347381 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-11107/kubeconfig
	I1119 02:35:54.121433  347381 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/kubeconfig: {Name:mka1017ae43351e828a5b28d36aa5ae57c3e40e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
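kubeconfig.go repairs the missing cluster and context entries itself; the same repair by hand would look roughly like this, where the server URL is taken from the node line below and the CA path is an assumption about this CI layout:

    KC=/home/jenkins/minikube-integration/21924-11107/kubeconfig
    kubectl --kubeconfig "$KC" config set-cluster newest-cni-239505 \
      --server=https://192.168.85.2:8443 \
      --certificate-authority=/home/jenkins/minikube-integration/21924-11107/.minikube/ca.crt
    kubectl --kubeconfig "$KC" config set-context newest-cni-239505 \
      --cluster=newest-cni-239505 --user=newest-cni-239505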
	I1119 02:35:54.121717  347381 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1119 02:35:54.121951  347381 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:35:54.122037  347381 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-239505"
	I1119 02:35:54.122058  347381 addons.go:70] Setting default-storageclass=true in profile "newest-cni-239505"
	I1119 02:35:54.122086  347381 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-239505"
	I1119 02:35:54.122095  347381 addons.go:70] Setting metrics-server=true in profile "newest-cni-239505"
	I1119 02:35:54.122098  347381 addons.go:70] Setting dashboard=true in profile "newest-cni-239505"
	I1119 02:35:54.122109  347381 addons.go:239] Setting addon metrics-server=true in "newest-cni-239505"
	I1119 02:35:54.122116  347381 addons.go:239] Setting addon dashboard=true in "newest-cni-239505"
	W1119 02:35:54.122117  347381 addons.go:248] addon metrics-server should already be in state true
	W1119 02:35:54.122124  347381 addons.go:248] addon dashboard should already be in state true
	I1119 02:35:54.122149  347381 host.go:66] Checking if "newest-cni-239505" exists ...
	I1119 02:35:54.122149  347381 host.go:66] Checking if "newest-cni-239505" exists ...
	I1119 02:35:54.122148  347381 config.go:182] Loaded profile config "newest-cni-239505": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:35:54.122086  347381 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-239505"
	W1119 02:35:54.122284  347381 addons.go:248] addon storage-provisioner should already be in state true
	I1119 02:35:54.122308  347381 host.go:66] Checking if "newest-cni-239505" exists ...
	I1119 02:35:54.122444  347381 cli_runner.go:164] Run: docker container inspect newest-cni-239505 --format={{.State.Status}}
	I1119 02:35:54.122652  347381 cli_runner.go:164] Run: docker container inspect newest-cni-239505 --format={{.State.Status}}
	I1119 02:35:54.122694  347381 cli_runner.go:164] Run: docker container inspect newest-cni-239505 --format={{.State.Status}}
	I1119 02:35:54.122776  347381 cli_runner.go:164] Run: docker container inspect newest-cni-239505 --format={{.State.Status}}
	I1119 02:35:54.124605  347381 out.go:179] * Verifying Kubernetes components...
	I1119 02:35:54.126303  347381 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:35:54.152835  347381 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1119 02:35:54.152844  347381 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:35:54.152882  347381 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1119 02:35:54.154137  347381 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1119 02:35:54.154290  347381 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1119 02:35:54.154358  347381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-239505
	I1119 02:35:54.154225  347381 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:35:54.154401  347381 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:35:54.154433  347381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-239505
	I1119 02:35:54.156142  347381 addons.go:239] Setting addon default-storageclass=true in "newest-cni-239505"
	W1119 02:35:54.156326  347381 addons.go:248] addon default-storageclass should already be in state true
	I1119 02:35:54.156404  347381 host.go:66] Checking if "newest-cni-239505" exists ...
	I1119 02:35:54.156708  347381 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1119 02:35:54.157245  347381 cli_runner.go:164] Run: docker container inspect newest-cni-239505 --format={{.State.Status}}
	I1119 02:35:54.157731  347381 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1119 02:35:54.157749  347381 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1119 02:35:54.157798  347381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-239505
	I1119 02:35:54.188718  347381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/newest-cni-239505/id_rsa Username:docker}
	I1119 02:35:54.190320  347381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/newest-cni-239505/id_rsa Username:docker}
	I1119 02:35:54.197862  347381 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:35:54.197888  347381 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:35:54.197964  347381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-239505
	I1119 02:35:54.202966  347381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/newest-cni-239505/id_rsa Username:docker}
	I1119 02:35:54.229773  347381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/newest-cni-239505/id_rsa Username:docker}
	I1119 02:35:54.293796  347381 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:35:54.311195  347381 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:35:54.311273  347381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:35:54.314730  347381 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1119 02:35:54.314754  347381 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1119 02:35:54.316243  347381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:35:54.324201  347381 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1119 02:35:54.324227  347381 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1119 02:35:54.327720  347381 api_server.go:72] duration metric: took 205.965134ms to wait for apiserver process to appear ...
	I1119 02:35:54.327742  347381 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:35:54.327761  347381 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 02:35:54.333078  347381 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1119 02:35:54.333111  347381 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1119 02:35:54.342584  347381 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1119 02:35:54.342783  347381 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1119 02:35:54.351267  347381 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1119 02:35:54.351293  347381 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1119 02:35:54.351611  347381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:35:54.359513  347381 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1119 02:35:54.359539  347381 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1119 02:35:54.371430  347381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1119 02:35:54.381932  347381 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1119 02:35:54.381957  347381 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1119 02:35:54.403472  347381 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1119 02:35:54.403498  347381 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1119 02:35:54.431412  347381 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1119 02:35:54.431437  347381 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1119 02:35:54.457683  347381 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1119 02:35:54.457769  347381 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1119 02:35:54.477806  347381 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1119 02:35:54.477829  347381 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1119 02:35:54.495653  347381 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 02:35:54.495678  347381 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1119 02:35:54.512120  347381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 02:35:55.625832  347381 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1119 02:35:55.625866  347381 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1119 02:35:55.625881  347381 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 02:35:55.644050  347381 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1119 02:35:55.644079  347381 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1119 02:35:55.828689  347381 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 02:35:55.832958  347381 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 02:35:55.832987  347381 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 02:35:56.219881  347381 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.903601528s)
	I1119 02:35:56.219948  347381 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.868306414s)
	I1119 02:35:56.220191  347381 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.848732707s)
	I1119 02:35:56.220217  347381 addons.go:480] Verifying addon metrics-server=true in "newest-cni-239505"
	I1119 02:35:56.220311  347381 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.708152783s)
	I1119 02:35:56.221760  347381 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-239505 addons enable metrics-server
	
	I1119 02:35:56.233075  347381 out.go:179] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I1119 02:35:56.234402  347381 addons.go:515] duration metric: took 2.11245939s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I1119 02:35:56.328111  347381 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 02:35:56.332360  347381 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 02:35:56.332408  347381 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 02:35:56.827902  347381 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 02:35:56.833296  347381 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1119 02:35:56.834340  347381 api_server.go:141] control plane version: v1.34.1
	I1119 02:35:56.834410  347381 api_server.go:131] duration metric: took 2.50665736s to wait for apiserver health ...
	I1119 02:35:56.834426  347381 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 02:35:56.837834  347381 system_pods.go:59] 9 kube-system pods found
	I1119 02:35:56.837863  347381 system_pods.go:61] "coredns-66bc5c9577-z2w74" [99f74e7f-9a36-4a6a-ac0c-0e60c6ae6208] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 02:35:56.837904  347381 system_pods.go:61] "etcd-newest-cni-239505" [e289db86-17ff-43b8-8efc-7dc7685bc943] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 02:35:56.837919  347381 system_pods.go:61] "kindnet-xc5xw" [0a431aa6-0127-4041-9a89-b99531aabc57] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1119 02:35:56.837928  347381 system_pods.go:61] "kube-apiserver-newest-cni-239505" [6cf242dd-09d6-42f1-9dcb-700f6f28e5ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 02:35:56.837946  347381 system_pods.go:61] "kube-controller-manager-newest-cni-239505" [0225b199-6c83-44b0-8137-10dd97a97ff0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 02:35:56.837956  347381 system_pods.go:61] "kube-proxy-jq9v9" [dc396cd8-ad47-4e4b-bd85-9aae772343e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1119 02:35:56.837966  347381 system_pods.go:61] "kube-scheduler-newest-cni-239505" [932b6e95-7566-4c69-a21a-26f7e913cb5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 02:35:56.837971  347381 system_pods.go:61] "metrics-server-746fcd58dc-dmggt" [2a4cdf1a-9087-4e82-bf14-03c030548aeb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 02:35:56.837978  347381 system_pods.go:61] "storage-provisioner" [002f233f-52fa-4a85-a93d-c871a0172fba] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 02:35:56.838005  347381 system_pods.go:74] duration metric: took 3.567484ms to wait for pod list to return data ...
	I1119 02:35:56.838016  347381 default_sa.go:34] waiting for default service account to be created ...
	I1119 02:35:56.840123  347381 default_sa.go:45] found service account: "default"
	I1119 02:35:56.840145  347381 default_sa.go:55] duration metric: took 2.119586ms for default service account to be created ...
	I1119 02:35:56.840156  347381 kubeadm.go:587] duration metric: took 2.718405976s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 02:35:56.840170  347381 node_conditions.go:102] verifying NodePressure condition ...
	I1119 02:35:56.842618  347381 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 02:35:56.842654  347381 node_conditions.go:123] node cpu capacity is 8
	I1119 02:35:56.842669  347381 node_conditions.go:105] duration metric: took 2.494569ms to run NodePressure ...
	I1119 02:35:56.842684  347381 start.go:242] waiting for startup goroutines ...
	I1119 02:35:56.842697  347381 start.go:247] waiting for cluster config update ...
	I1119 02:35:56.842715  347381 start.go:256] writing updated cluster config ...
	I1119 02:35:56.843019  347381 ssh_runner.go:195] Run: rm -f paused
	I1119 02:35:56.891775  347381 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 02:35:56.893703  347381 out.go:179] * Done! kubectl is now configured to use "newest-cni-239505" cluster and "default" namespace by default
	
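The start log above ends with minikube's apiserver readiness loop: /healthz first answers 403 (anonymous requests are refused until the bootstrap RBAC roles, including system:public-info-viewer, are installed), then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes poststarthooks are still pending, and finally 200. A minimal sketch of such a poll in Go, assuming anonymous access and skipped certificate verification (minikube's real checker may authenticate differently):

    // healthz_poll.go - a minimal sketch of an apiserver readiness poll,
    // assuming anonymous HTTPS access and skipping TLS verification.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://192.168.85.2:8443/healthz" // endpoint from the log above
        for i := 0; i < 30; i++ {
            resp, err := client.Get(url)
            if err != nil {
                time.Sleep(500 * time.Millisecond)
                continue
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            fmt.Printf("healthz -> %d: %.40s\n", resp.StatusCode, body)
            if resp.StatusCode == http.StatusOK {
                return // "ok", as at 02:35:56.833 above
            }
            time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
        }
    }

Run against a cluster mid-bootstrap, this prints the same 403 -> 500 -> 200 progression before returning.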
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	96973ef75ad93       56cc512116c8f       10 seconds ago      Running             busybox                   0                   2789d7cc28880       busybox                                                default
	b3afe30e00aab       52546a367cc9e       15 seconds ago      Running             coredns                   0                   8bc4ff9d8b6f6       coredns-66bc5c9577-8tnd6                               kube-system
	8610ab673ef0a       6e38f40d628db       15 seconds ago      Running             storage-provisioner       0                   e82c75d2a4f19       storage-provisioner                                    kube-system
	744311138e698       409467f978b4a       26 seconds ago      Running             kindnet-cni               0                   13942c625a07a       kindnet-ddmgw                                          kube-system
	e82207bf97f93       fc25172553d79       26 seconds ago      Running             kube-proxy                0                   45243c52824ca       kube-proxy-lk5qw                                       kube-system
	a2a6000099a95       7dd6aaa1717ab       37 seconds ago      Running             kube-scheduler            0                   f3ee1be370bf4       kube-scheduler-default-k8s-diff-port-543625            kube-system
	9781005051618       c80c8dbafe7dd       37 seconds ago      Running             kube-controller-manager   0                   19ab0b6718d4d       kube-controller-manager-default-k8s-diff-port-543625   kube-system
	f4a53d6b3d755       5f1f5298c888d       37 seconds ago      Running             etcd                      0                   b34ed4f5bb328       etcd-default-k8s-diff-port-543625                      kube-system
	4ee73feddb3ba       c3994bc696102       37 seconds ago      Running             kube-apiserver            0                   25288b9b95008       kube-apiserver-default-k8s-diff-port-543625            kube-system
	
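The table above is CRI-level state: each row is a container running inside a pod sandbox (POD ID), and ATTEMPT counts restarts of that container name within the pod. A sketch that reproduces the listing with crictl via os/exec, assuming it runs on the node itself (for example under `minikube ssh`) where the containerd socket exists:

    // cri_ps.go - a sketch reproducing the container-status table above;
    // assumes on-node execution with access to the containerd socket.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("sudo", "crictl",
            "--runtime-endpoint", "unix:///run/containerd/containerd.sock",
            "ps", "-a").CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("crictl failed:", err)
        }
    }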
	
	==> containerd <==
	Nov 19 02:35:49 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:49.750488233Z" level=info msg="Container 8610ab673ef0af7d8c2680d80e214be7be70891606420a0d565e1f64e7bacad2: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 02:35:49 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:49.756483944Z" level=info msg="CreateContainer within sandbox \"e82c75d2a4f19bc67cd719cff710510e5bbe0b04985ac60e3669a094d3a366f7\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"8610ab673ef0af7d8c2680d80e214be7be70891606420a0d565e1f64e7bacad2\""
	Nov 19 02:35:49 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:49.757156664Z" level=info msg="StartContainer for \"8610ab673ef0af7d8c2680d80e214be7be70891606420a0d565e1f64e7bacad2\""
	Nov 19 02:35:49 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:49.758116308Z" level=info msg="connecting to shim 8610ab673ef0af7d8c2680d80e214be7be70891606420a0d565e1f64e7bacad2" address="unix:///run/containerd/s/29404008e72b4c691b4fcb32aa12f6d78ecd3e0226338ddc737bc81c010cf16c" protocol=ttrpc version=3
	Nov 19 02:35:49 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:49.758155146Z" level=info msg="CreateContainer within sandbox \"8bc4ff9d8b6f68e70f335d4de7db1454e887420855b73d68414a79ede27d8bb2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b3afe30e00aab3535a8058dfb8669ebfe41c8ba31866027f1b5648fe117eb425\""
	Nov 19 02:35:49 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:49.758676150Z" level=info msg="StartContainer for \"b3afe30e00aab3535a8058dfb8669ebfe41c8ba31866027f1b5648fe117eb425\""
	Nov 19 02:35:49 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:49.759581042Z" level=info msg="connecting to shim b3afe30e00aab3535a8058dfb8669ebfe41c8ba31866027f1b5648fe117eb425" address="unix:///run/containerd/s/6d0a92216c20298c951bdae7c2e61b6e7e0c7ec19c0fc6b8cd4cfc24eff6e7ae" protocol=ttrpc version=3
	Nov 19 02:35:49 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:49.814533499Z" level=info msg="StartContainer for \"8610ab673ef0af7d8c2680d80e214be7be70891606420a0d565e1f64e7bacad2\" returns successfully"
	Nov 19 02:35:49 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:49.818780462Z" level=info msg="StartContainer for \"b3afe30e00aab3535a8058dfb8669ebfe41c8ba31866027f1b5648fe117eb425\" returns successfully"
	Nov 19 02:35:52 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:52.289127194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:2c7a1e56-5397-4855-a23a-6fee9e7c0a32,Namespace:default,Attempt:0,}"
	Nov 19 02:35:52 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:52.333972633Z" level=info msg="connecting to shim 2789d7cc28880c13680d634a036dd2d7cf21653c9760f13f29b8796450a228f0" address="unix:///run/containerd/s/438a7fa212400e014d875c82c7cc5629c1fd1d192e60a69ea4c2318c6798f4e8" namespace=k8s.io protocol=ttrpc version=3
	Nov 19 02:35:52 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:52.410189629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:2c7a1e56-5397-4855-a23a-6fee9e7c0a32,Namespace:default,Attempt:0,} returns sandbox id \"2789d7cc28880c13680d634a036dd2d7cf21653c9760f13f29b8796450a228f0\""
	Nov 19 02:35:52 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:52.412352818Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 02:35:54 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:54.491711780Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 02:35:54 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:54.492636123Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396645"
	Nov 19 02:35:54 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:54.494022411Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 02:35:54 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:54.497158896Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 19 02:35:54 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:54.497769517Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.085306473s"
	Nov 19 02:35:54 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:54.497818397Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 19 02:35:54 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:54.501981223Z" level=info msg="CreateContainer within sandbox \"2789d7cc28880c13680d634a036dd2d7cf21653c9760f13f29b8796450a228f0\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 19 02:35:54 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:54.509954827Z" level=info msg="Container 96973ef75ad93373dc6e9c31279849e5ac18ee85c927802d51f3eedf214c3a25: CDI devices from CRI Config.CDIDevices: []"
	Nov 19 02:35:54 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:54.516859875Z" level=info msg="CreateContainer within sandbox \"2789d7cc28880c13680d634a036dd2d7cf21653c9760f13f29b8796450a228f0\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"96973ef75ad93373dc6e9c31279849e5ac18ee85c927802d51f3eedf214c3a25\""
	Nov 19 02:35:54 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:54.517695897Z" level=info msg="StartContainer for \"96973ef75ad93373dc6e9c31279849e5ac18ee85c927802d51f3eedf214c3a25\""
	Nov 19 02:35:54 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:54.518645822Z" level=info msg="connecting to shim 96973ef75ad93373dc6e9c31279849e5ac18ee85c927802d51f3eedf214c3a25" address="unix:///run/containerd/s/438a7fa212400e014d875c82c7cc5629c1fd1d192e60a69ea4c2318c6798f4e8" protocol=ttrpc version=3
	Nov 19 02:35:54 default-k8s-diff-port-543625 containerd[659]: time="2025-11-19T02:35:54.582309904Z" level=info msg="StartContainer for \"96973ef75ad93373dc6e9c31279849e5ac18ee85c927802d51f3eedf214c3a25\" returns successfully"
	
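The containerd log above traces the CRI call sequence for the busybox pod: RunPodSandbox, PullImage (with digest and a 2.08s pull duration), CreateContainer inside the sandbox, then StartContainer over the shim's ttrpc connection. A sketch of the same pull step, under the same on-node assumption as the crictl example above:

    // pull_image.go - a sketch of the PullImage step containerd logs above;
    // assumes on-node execution (e.g. under `minikube ssh`).
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("sudo", "crictl",
            "--runtime-endpoint", "unix:///run/containerd/containerd.sock",
            "pull", "gcr.io/k8s-minikube/busybox:1.28.4-glibc").CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("pull failed:", err)
        }
    }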
	
	==> coredns [b3afe30e00aab3535a8058dfb8669ebfe41c8ba31866027f1b5648fe117eb425] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38191 - 865 "HINFO IN 6651518994578924638.4371494553589656537. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.017270142s
	
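CoreDNS logs the SHA512 of its parsed configuration and then one HINFO query against itself; that random-label lookup appears to be the loop plugin's startup self-probe, so the NXDOMAIN reply is expected rather than an error. A sketch that queries the cluster resolver directly, assuming the kube-dns ClusterIP 10.96.0.10 (allocated in the kube-apiserver log below) is reachable from where it runs:

    // dns_probe.go - a sketch querying the cluster DNS service directly,
    // assuming the kube-dns ClusterIP 10.96.0.10 is reachable.
    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        r := &net.Resolver{
            PreferGo: true,
            Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
                d := net.Dialer{Timeout: 2 * time.Second}
                return d.DialContext(ctx, network, "10.96.0.10:53")
            },
        }
        addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
        fmt.Println(addrs, err) // expect [10.96.0.1] when CoreDNS is healthy
    }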
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-543625
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-543625
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=default-k8s-diff-port-543625
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T02_35_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 02:35:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-543625
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 02:36:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 02:36:02 +0000   Wed, 19 Nov 2025 02:35:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 02:36:02 +0000   Wed, 19 Nov 2025 02:35:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 02:36:02 +0000   Wed, 19 Nov 2025 02:35:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 02:36:02 +0000   Wed, 19 Nov 2025 02:35:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-543625
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                774ecfee-a138-4988-8e59-3e7123e6ca41
	  Boot ID:                    fea1659d-b751-4f87-a281-819adf52de2d
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-66bc5c9577-8tnd6                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-default-k8s-diff-port-543625                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-ddmgw                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-default-k8s-diff-port-543625             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-543625    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-lk5qw                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-default-k8s-diff-port-543625             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 33s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  33s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  32s   kubelet          Node default-k8s-diff-port-543625 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s   kubelet          Node default-k8s-diff-port-543625 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s   kubelet          Node default-k8s-diff-port-543625 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node default-k8s-diff-port-543625 event: Registered Node default-k8s-diff-port-543625 in Controller
	  Normal  NodeReady                15s   kubelet          Node default-k8s-diff-port-543625 status is now: NodeReady
	
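`describe nodes` condenses the node's conditions, capacity, pod requests, and lifecycle events; the Ready transition at 02:35:49 matches the NodeReady event and the moment the node-lifecycle controller exits disruption mode in the controller-manager log below. A sketch extracting just the Ready condition, assuming kubectl is on PATH and the current kubeconfig context points at this cluster:

    // node_ready.go - a sketch that extracts the Ready condition reported
    // in the "describe nodes" output above; assumes kubectl on PATH with
    // a context pointing at this cluster.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        jsonpath := `{.status.conditions[?(@.type=="Ready")].status}`
        out, err := exec.Command("kubectl", "get", "node",
            "default-k8s-diff-port-543625", "-o", "jsonpath="+jsonpath).Output()
        if err != nil {
            fmt.Println("kubectl failed:", err)
            return
        }
        fmt.Println("Ready:", string(out)) // "True" once the CNI is up
    }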
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 cb 73 dc 0d 8a 08 06
	[Nov19 02:31] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 74 0c d7 a6 53 08 06
	[  +0.000339] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 cb 73 dc 0d 8a 08 06
	[ +28.680399] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 e9 7c 92 36 13 08 06
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff fa 7c 46 16 38 40 08 06
	[Nov19 02:32] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1a f6 63 65 24 bc 08 06
	[  +4.552839] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 88 80 5a bc 94 08 06
	[ +11.086189] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 76 d1 26 7f 3d 08 06
	[  +0.000377] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa 7c 46 16 38 40 08 06
	[  +9.270754] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a2 49 fd 34 51 3b 08 06
	[  +0.000702] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 88 80 5a bc 94 08 06
	[ +23.593864] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca 86 43 5f 18 4c 08 06
	[  +0.000495] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a f6 63 65 24 bc 08 06
	
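The "martian source" lines are the kernel flagging packets whose source address fails the reverse-path check on eth0, which is common while nested Docker/CNI pod networks converge and is typically harmless in this setup; the kernel only emits them because the log_martians sysctl is enabled. A sketch reading that sysctl, assuming a Linux host with procfs:

    // martians.go - a sketch that reads the sysctl controlling the
    // "martian source" lines above; assumes a Linux host with procfs.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        b, err := os.ReadFile("/proc/sys/net/ipv4/conf/all/log_martians")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("log_martians =", strings.TrimSpace(string(b))) // 1 -> log to dmesg
    }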
	
	==> etcd [f4a53d6b3d755cc7fc34555d06b48158e4c17a007f93bbb34db6c81a5ec471cb] <==
	{"level":"warn","ts":"2025-11-19T02:35:28.457232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.465859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.475413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.482903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.490981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.499504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.506815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.515844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.524949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.533845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.543284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.548968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.556563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.564097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.570592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.585675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.593507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.601815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.609556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.618964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.627757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.648833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.656953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.667240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:35:28.733182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45386","server-name":"","error":"EOF"}
	
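etcd's repeated "rejected connection ... EOF" warnings mean a client opened a TCP connection to the client port and closed it before completing the TLS handshake, which is the signature of a plain TCP liveness probe rather than a fault. A sketch producing exactly that pattern, assuming etcd listens on 127.0.0.1:2379 as on a minikube control plane:

    // tcp_probe.go - a sketch of the bare TCP probe that yields etcd's
    // "rejected connection ... EOF" warnings above: connect, then close
    // without a TLS handshake. Assumes etcd on 127.0.0.1:2379.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        c, err := net.DialTimeout("tcp", "127.0.0.1:2379", time.Second)
        if err != nil {
            fmt.Println("port closed:", err)
            return
        }
        c.Close() // etcd sees EOF mid-handshake and logs the warning
        fmt.Println("port open")
    }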
	
	==> kernel <==
	 02:36:05 up  1:18,  0 user,  load average: 3.39, 3.73, 2.69
	Linux default-k8s-diff-port-543625 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [744311138e698615688618d52b4d6fdfb5cf9572c783692108167708987fd1ee] <==
	I1119 02:35:39.051359       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 02:35:39.051720       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1119 02:35:39.051907       1 main.go:148] setting mtu 1500 for CNI 
	I1119 02:35:39.051931       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 02:35:39.051946       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T02:35:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 02:35:39.252119       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 02:35:39.252149       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 02:35:39.252160       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 02:35:39.252323       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 02:35:39.553023       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 02:35:39.553064       1 metrics.go:72] Registering metrics
	I1119 02:35:39.553155       1 controller.go:711] "Syncing nftables rules"
	I1119 02:35:49.254332       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 02:35:49.254467       1 main.go:301] handling current node
	I1119 02:35:59.254528       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 02:35:59.254599       1 main.go:301] handling current node
	
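kindnet starts its network-policy controller, waits for its informer caches, then reconciles the local node on a fixed cadence (02:35:49 to 02:35:59 above is one ten-second tick). A sketch of that ticker-driven loop shape; handleNode here is a stand-in, not kindnet's actual logic:

    // reconcile.go - a sketch of the ticker-driven reconcile loop visible
    // in the kindnet log above (one "Handling node" pass every ~10s).
    package main

    import (
        "fmt"
        "time"
    )

    func handleNode(ips []string) {
        fmt.Println("Handling node with IPs:", ips)
    }

    func main() {
        t := time.NewTicker(10 * time.Second)
        defer t.Stop()
        for range t.C { // runs until interrupted
            handleNode([]string{"192.168.103.2"})
        }
    }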
	
	==> kube-apiserver [4ee73feddb3bacb34afffcfab4c3faff9115fd2236ca0b7d4d4cb1c8e2971c8e] <==
	I1119 02:35:29.432205       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:35:29.432499       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1119 02:35:29.437938       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 02:35:29.438273       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1119 02:35:29.440044       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:35:29.528058       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 02:35:30.228208       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 02:35:30.232246       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 02:35:30.232262       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 02:35:30.757018       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 02:35:30.798037       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 02:35:30.933786       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 02:35:30.940628       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1119 02:35:30.942030       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 02:35:30.949814       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 02:35:31.775102       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 02:35:32.092896       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 02:35:32.103901       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 02:35:32.112321       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 02:35:37.425665       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 02:35:37.628227       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:35:37.633788       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:35:37.875334       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1119 02:36:02.095386       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8444->192.168.103.1:57554: use of closed network connection
	
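The apiserver log records Service CIDR management and ClusterIP allocation: 10.96.0.1 for default/kubernetes and 10.96.0.10 for kube-system/kube-dns, both drawn from the 10.96.0.0/12 range the allocator was created for. A quick containment check of those allocations:

    // cidr_check.go - a sketch tying the apiserver's ClusterIP allocations
    // above to their Service CIDR: both IPs must fall inside 10.96.0.0/12.
    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        _, cidr, _ := net.ParseCIDR("10.96.0.0/12")
        for _, ip := range []string{"10.96.0.1", "10.96.0.10"} {
            fmt.Println(ip, "in", cidr, "=", cidr.Contains(net.ParseIP(ip)))
        }
    }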
	
	==> kube-controller-manager [97810050516180977cfc43fcfdc3911ff7e97009b1bb289b6963368736c25bf9] <==
	I1119 02:35:36.736837       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1119 02:35:36.760580       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1119 02:35:36.772522       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 02:35:36.772535       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1119 02:35:36.772685       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 02:35:36.772723       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 02:35:36.772730       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 02:35:36.772741       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1119 02:35:36.772807       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-543625"
	I1119 02:35:36.772854       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 02:35:36.772845       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1119 02:35:36.772890       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1119 02:35:36.773052       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1119 02:35:36.773117       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1119 02:35:36.773117       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1119 02:35:36.773286       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 02:35:36.773588       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1119 02:35:36.773672       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 02:35:36.773683       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1119 02:35:36.775149       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 02:35:36.777907       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 02:35:36.779093       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1119 02:35:36.787413       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1119 02:35:36.795039       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 02:35:51.774722       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
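The controller-manager lines show the standard informer startup contract: every controller waits for its shared caches to sync before acting, and the taint-based disruption mode flips off at 02:35:51 once the node reports Ready. A minimal sketch of the same wait using client-go, assuming a kubeconfig at the default path and the k8s.io/client-go module:

    // cache_sync.go - a sketch of the "Caches are synced" pattern from the
    // controller-manager log above; assumes ~/.kube/config and client-go.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
        nodes := factory.Core().V1().Nodes().Informer()

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop)
        if !cache.WaitForCacheSync(stop, nodes.HasSynced) {
            panic("cache never synced")
        }
        fmt.Println(`Caches are synced controller="nodes"`)
    }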
	
	==> kube-proxy [e82207bf97f9335bd740e2888ec7d6935a54cf62642cb055afc9c70e92001408] <==
	I1119 02:35:38.539596       1 server_linux.go:53] "Using iptables proxy"
	I1119 02:35:38.611890       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 02:35:38.712100       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 02:35:38.712140       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1119 02:35:38.712260       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 02:35:38.734413       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 02:35:38.734480       1 server_linux.go:132] "Using iptables Proxier"
	I1119 02:35:38.739902       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 02:35:38.740510       1 server.go:527] "Version info" version="v1.34.1"
	I1119 02:35:38.740548       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:35:38.741881       1 config.go:200] "Starting service config controller"
	I1119 02:35:38.741913       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 02:35:38.741923       1 config.go:106] "Starting endpoint slice config controller"
	I1119 02:35:38.741957       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 02:35:38.741980       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 02:35:38.741986       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 02:35:38.741980       1 config.go:309] "Starting node config controller"
	I1119 02:35:38.742018       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 02:35:38.842110       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 02:35:38.842139       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 02:35:38.842180       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 02:35:38.842330       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
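kube-proxy picked IPv4 as the primary family from the node IP, and the warning notes that with nodePortAddresses unset it accepts NodePort connections on all local IPs (the log suggests `--nodeport-addresses primary`). A simplified stand-in for that family detection, not kube-proxy's actual code:

    // ipfamily.go - a simplified stand-in for the primary-IP-family
    // detection behind kube-proxy's "dual-stack mode" line above.
    package main

    import (
        "fmt"
        "net"
    )

    func primaryFamily(nodeIP string) string {
        if ip := net.ParseIP(nodeIP); ip != nil && ip.To4() != nil {
            return "IPv4"
        }
        return "IPv6"
    }

    func main() {
        fmt.Println(primaryFamily("192.168.103.2")) // IPv4, matching the log
    }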
	
	==> kube-scheduler [a2a6000099a9578a791b24a5731fecae1b80316d70c109a330f7a9ba40a353a0] <==
	E1119 02:35:29.302412       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 02:35:29.302676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 02:35:29.302761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 02:35:29.302830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 02:35:29.302880       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 02:35:29.302927       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 02:35:29.302974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 02:35:29.303031       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 02:35:29.304429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 02:35:30.103055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 02:35:30.126442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 02:35:30.155125       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 02:35:30.191624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 02:35:30.224864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 02:35:30.224864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 02:35:30.235444       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 02:35:30.275683       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 02:35:30.317567       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 02:35:30.362163       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 02:35:30.362284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1119 02:35:30.391592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 02:35:30.431160       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 02:35:30.469591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 02:35:30.553665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1119 02:35:32.688045       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 02:35:33 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:33.020736    1455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-543625" podStartSLOduration=1.020712891 podStartE2EDuration="1.020712891s" podCreationTimestamp="2025-11-19 02:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:35:33.020411814 +0000 UTC m=+1.157324267" watchObservedRunningTime="2025-11-19 02:35:33.020712891 +0000 UTC m=+1.157625324"
	Nov 19 02:35:33 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:33.020918    1455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-543625" podStartSLOduration=1.020910926 podStartE2EDuration="1.020910926s" podCreationTimestamp="2025-11-19 02:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:35:33.009091049 +0000 UTC m=+1.146003500" watchObservedRunningTime="2025-11-19 02:35:33.020910926 +0000 UTC m=+1.157823377"
	Nov 19 02:35:33 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:33.030229    1455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-543625" podStartSLOduration=1.030206154 podStartE2EDuration="1.030206154s" podCreationTimestamp="2025-11-19 02:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:35:33.030172066 +0000 UTC m=+1.167084519" watchObservedRunningTime="2025-11-19 02:35:33.030206154 +0000 UTC m=+1.167118608"
	Nov 19 02:35:33 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:33.050789    1455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-543625" podStartSLOduration=1.050765854 podStartE2EDuration="1.050765854s" podCreationTimestamp="2025-11-19 02:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:35:33.041204013 +0000 UTC m=+1.178116466" watchObservedRunningTime="2025-11-19 02:35:33.050765854 +0000 UTC m=+1.187678306"
	Nov 19 02:35:36 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:36.814739    1455 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 19 02:35:36 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:36.815447    1455 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 19 02:35:37 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:37.977143    1455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/88b25c10-b469-410d-8418-e0ceaa17a8ea-lib-modules\") pod \"kindnet-ddmgw\" (UID: \"88b25c10-b469-410d-8418-e0ceaa17a8ea\") " pod="kube-system/kindnet-ddmgw"
	Nov 19 02:35:37 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:37.977203    1455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5tpn\" (UniqueName: \"kubernetes.io/projected/88b25c10-b469-410d-8418-e0ceaa17a8ea-kube-api-access-w5tpn\") pod \"kindnet-ddmgw\" (UID: \"88b25c10-b469-410d-8418-e0ceaa17a8ea\") " pod="kube-system/kindnet-ddmgw"
	Nov 19 02:35:37 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:37.977228    1455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/88b25c10-b469-410d-8418-e0ceaa17a8ea-cni-cfg\") pod \"kindnet-ddmgw\" (UID: \"88b25c10-b469-410d-8418-e0ceaa17a8ea\") " pod="kube-system/kindnet-ddmgw"
	Nov 19 02:35:37 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:37.977259    1455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/36f0dd8e-7095-4d43-b4f3-4a4b11b6f852-kube-proxy\") pod \"kube-proxy-lk5qw\" (UID: \"36f0dd8e-7095-4d43-b4f3-4a4b11b6f852\") " pod="kube-system/kube-proxy-lk5qw"
	Nov 19 02:35:37 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:37.977325    1455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36f0dd8e-7095-4d43-b4f3-4a4b11b6f852-xtables-lock\") pod \"kube-proxy-lk5qw\" (UID: \"36f0dd8e-7095-4d43-b4f3-4a4b11b6f852\") " pod="kube-system/kube-proxy-lk5qw"
	Nov 19 02:35:37 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:37.977397    1455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36f0dd8e-7095-4d43-b4f3-4a4b11b6f852-lib-modules\") pod \"kube-proxy-lk5qw\" (UID: \"36f0dd8e-7095-4d43-b4f3-4a4b11b6f852\") " pod="kube-system/kube-proxy-lk5qw"
	Nov 19 02:35:37 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:37.977435    1455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/88b25c10-b469-410d-8418-e0ceaa17a8ea-xtables-lock\") pod \"kindnet-ddmgw\" (UID: \"88b25c10-b469-410d-8418-e0ceaa17a8ea\") " pod="kube-system/kindnet-ddmgw"
	Nov 19 02:35:37 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:37.977463    1455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfj9v\" (UniqueName: \"kubernetes.io/projected/36f0dd8e-7095-4d43-b4f3-4a4b11b6f852-kube-api-access-rfj9v\") pod \"kube-proxy-lk5qw\" (UID: \"36f0dd8e-7095-4d43-b4f3-4a4b11b6f852\") " pod="kube-system/kube-proxy-lk5qw"
	Nov 19 02:35:39 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:39.007210    1455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-ddmgw" podStartSLOduration=2.007188518 podStartE2EDuration="2.007188518s" podCreationTimestamp="2025-11-19 02:35:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:35:39.006431519 +0000 UTC m=+7.143343972" watchObservedRunningTime="2025-11-19 02:35:39.007188518 +0000 UTC m=+7.144100949"
	Nov 19 02:35:39 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:39.016661    1455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lk5qw" podStartSLOduration=2.016643307 podStartE2EDuration="2.016643307s" podCreationTimestamp="2025-11-19 02:35:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:35:39.016280341 +0000 UTC m=+7.153192796" watchObservedRunningTime="2025-11-19 02:35:39.016643307 +0000 UTC m=+7.153555755"
	Nov 19 02:35:49 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:49.307038    1455 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 19 02:35:49 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:49.361234    1455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb5vf\" (UniqueName: \"kubernetes.io/projected/b72767a4-d2fe-420c-9cd5-7877ae681fd5-kube-api-access-zb5vf\") pod \"storage-provisioner\" (UID: \"b72767a4-d2fe-420c-9cd5-7877ae681fd5\") " pod="kube-system/storage-provisioner"
	Nov 19 02:35:49 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:49.361310    1455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01ac50b7-4308-4544-8340-ae41c3dd2992-config-volume\") pod \"coredns-66bc5c9577-8tnd6\" (UID: \"01ac50b7-4308-4544-8340-ae41c3dd2992\") " pod="kube-system/coredns-66bc5c9577-8tnd6"
	Nov 19 02:35:49 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:49.361341    1455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwzhc\" (UniqueName: \"kubernetes.io/projected/01ac50b7-4308-4544-8340-ae41c3dd2992-kube-api-access-vwzhc\") pod \"coredns-66bc5c9577-8tnd6\" (UID: \"01ac50b7-4308-4544-8340-ae41c3dd2992\") " pod="kube-system/coredns-66bc5c9577-8tnd6"
	Nov 19 02:35:49 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:49.361434    1455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b72767a4-d2fe-420c-9cd5-7877ae681fd5-tmp\") pod \"storage-provisioner\" (UID: \"b72767a4-d2fe-420c-9cd5-7877ae681fd5\") " pod="kube-system/storage-provisioner"
	Nov 19 02:35:50 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:50.047162    1455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8tnd6" podStartSLOduration=13.047135205 podStartE2EDuration="13.047135205s" podCreationTimestamp="2025-11-19 02:35:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:35:50.035856263 +0000 UTC m=+18.172768715" watchObservedRunningTime="2025-11-19 02:35:50.047135205 +0000 UTC m=+18.184047702"
	Nov 19 02:35:51 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:51.968876    1455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.968850472 podStartE2EDuration="13.968850472s" podCreationTimestamp="2025-11-19 02:35:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:35:50.057869094 +0000 UTC m=+18.194781547" watchObservedRunningTime="2025-11-19 02:35:51.968850472 +0000 UTC m=+20.105762908"
	Nov 19 02:35:52 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:52.081882    1455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmtcx\" (UniqueName: \"kubernetes.io/projected/2c7a1e56-5397-4855-a23a-6fee9e7c0a32-kube-api-access-rmtcx\") pod \"busybox\" (UID: \"2c7a1e56-5397-4855-a23a-6fee9e7c0a32\") " pod="default/busybox"
	Nov 19 02:35:55 default-k8s-diff-port-543625 kubelet[1455]: I1119 02:35:55.048595    1455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.9616222429999999 podStartE2EDuration="4.048577252s" podCreationTimestamp="2025-11-19 02:35:51 +0000 UTC" firstStartedPulling="2025-11-19 02:35:52.411824196 +0000 UTC m=+20.548736643" lastFinishedPulling="2025-11-19 02:35:54.498779207 +0000 UTC m=+22.635691652" observedRunningTime="2025-11-19 02:35:55.048406308 +0000 UTC m=+23.185318761" watchObservedRunningTime="2025-11-19 02:35:55.048577252 +0000 UTC m=+23.185489700"
	
	
	==> storage-provisioner [8610ab673ef0af7d8c2680d80e214be7be70891606420a0d565e1f64e7bacad2] <==
	I1119 02:35:49.823321       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 02:35:49.831350       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 02:35:49.831412       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 02:35:49.834145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:35:49.839499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 02:35:49.839634       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 02:35:49.839814       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-543625_07de42df-e823-4a34-99b0-901998666d9e!
	I1119 02:35:49.839811       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4392a95d-b5b8-4658-a497-3ce97f257fa7", APIVersion:"v1", ResourceVersion:"409", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-543625_07de42df-e823-4a34-99b0-901998666d9e became leader
	W1119 02:35:49.841940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:35:49.845445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 02:35:49.940353       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-543625_07de42df-e823-4a34-99b0-901998666d9e!
	W1119 02:35:51.848539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:35:51.856831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:35:53.861799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:35:53.867140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:35:55.870810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:35:55.876681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:35:57.879801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:35:57.884618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:35:59.888265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:35:59.893555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:36:01.896972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:36:01.901710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:36:03.905564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:36:03.909970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-543625 -n default-k8s-diff-port-543625
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-543625 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (13.86s)
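
Two recurring patterns in the captured logs above are routine startup noise rather than part of this failure's signature. The kube-scheduler's burst of "Failed to watch ... is forbidden" errors is typical while the control plane boots, before the system:kube-scheduler RBAC bindings are served; it ends once the "Caches are synced" line appears. The storage-provisioner's repeating "v1 Endpoints is deprecated in v1.33+" warnings come from its leader election, which still takes its lock on a v1 Endpoints object (leaderelection.go:243 above). A minimal sketch of the Lease-based lock the warning points to, using client-go; the identity string and timings are illustrative, not the provisioner's actual configuration:

package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Lock on a coordination.k8s.io/v1 Lease instead of the deprecated
	// v1 Endpoints object; same lease name as in the logs above.
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath",
			Namespace: "kube-system",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "provisioner-example-id"},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// start the provisioner controller here
			},
			OnStoppedLeading: func() {
				// stop work; another replica holds the lease
			},
		},
	})
}

Taking the lock on a Lease keeps leader election off the deprecated Endpoints API, which is what silences these warnings.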


Test pass (302/333)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 12.12
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 11.19
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.08
18 TestDownloadOnly/v1.34.1/DeleteAll 0.24
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
20 TestDownloadOnlyKic 0.42
21 TestBinaryMirror 0.83
22 TestOffline 56.59
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 124.89
29 TestAddons/serial/Volcano 40.25
31 TestAddons/serial/GCPAuth/Namespaces 0.11
32 TestAddons/serial/GCPAuth/FakeCredentials 10.46
35 TestAddons/parallel/Registry 15.32
36 TestAddons/parallel/RegistryCreds 0.65
37 TestAddons/parallel/Ingress 20.95
38 TestAddons/parallel/InspektorGadget 10.65
39 TestAddons/parallel/MetricsServer 5.64
41 TestAddons/parallel/CSI 48.28
42 TestAddons/parallel/Headlamp 17.48
43 TestAddons/parallel/CloudSpanner 5.54
44 TestAddons/parallel/LocalPath 12.22
45 TestAddons/parallel/NvidiaDevicePlugin 6.5
46 TestAddons/parallel/Yakd 10.72
47 TestAddons/parallel/AmdGpuDevicePlugin 5.5
48 TestAddons/StoppedEnableDisable 12.27
49 TestCertOptions 25.13
50 TestCertExpiration 216.4
52 TestForceSystemdFlag 25.82
53 TestForceSystemdEnv 26.81
54 TestDockerEnvContainerd 36.21
58 TestErrorSpam/setup 21.06
59 TestErrorSpam/start 0.68
60 TestErrorSpam/status 0.95
61 TestErrorSpam/pause 1.48
62 TestErrorSpam/unpause 1.53
63 TestErrorSpam/stop 1.5
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 39.5
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.15
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.11
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.21
75 TestFunctional/serial/CacheCmd/cache/add_local 1.94
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.62
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 47.15
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.25
86 TestFunctional/serial/LogsFileCmd 1.26
87 TestFunctional/serial/InvalidService 3.89
89 TestFunctional/parallel/ConfigCmd 0.41
90 TestFunctional/parallel/DashboardCmd 12.29
91 TestFunctional/parallel/DryRun 0.5
92 TestFunctional/parallel/InternationalLanguage 0.23
93 TestFunctional/parallel/StatusCmd 1.09
97 TestFunctional/parallel/ServiceCmdConnect 8.55
98 TestFunctional/parallel/AddonsCmd 0.17
99 TestFunctional/parallel/PersistentVolumeClaim 30.1
101 TestFunctional/parallel/SSHCmd 0.56
102 TestFunctional/parallel/CpCmd 1.66
103 TestFunctional/parallel/MySQL 19.1
104 TestFunctional/parallel/FileSync 0.29
105 TestFunctional/parallel/CertSync 1.73
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.6
113 TestFunctional/parallel/License 0.59
114 TestFunctional/parallel/Version/short 0.06
115 TestFunctional/parallel/Version/components 0.5
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
120 TestFunctional/parallel/ImageCommands/ImageBuild 4.93
121 TestFunctional/parallel/ImageCommands/Setup 1.82
122 TestFunctional/parallel/ServiceCmd/DeployApp 9.14
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.09
124 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1
125 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.82
126 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.32
127 TestFunctional/parallel/ImageCommands/ImageRemove 0.46
128 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.62
129 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.38
130 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
131 TestFunctional/parallel/ProfileCmd/profile_list 0.4
132 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
133 TestFunctional/parallel/MountCmd/any-port 9.36
134 TestFunctional/parallel/ServiceCmd/List 0.4
135 TestFunctional/parallel/ServiceCmd/JSONOutput 0.4
136 TestFunctional/parallel/ServiceCmd/HTTPS 0.44
137 TestFunctional/parallel/ServiceCmd/Format 0.41
138 TestFunctional/parallel/ServiceCmd/URL 0.39
140 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.48
141 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
143 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 16.24
144 TestFunctional/parallel/MountCmd/specific-port 2.15
145 TestFunctional/parallel/MountCmd/VerifyCleanup 1.82
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
149 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
150 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
154 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 108.33
163 TestMultiControlPlane/serial/DeployApp 5.58
164 TestMultiControlPlane/serial/PingHostFromPods 1.18
165 TestMultiControlPlane/serial/AddWorkerNode 24.27
166 TestMultiControlPlane/serial/NodeLabels 0.07
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.89
168 TestMultiControlPlane/serial/CopyFile 17.18
169 TestMultiControlPlane/serial/StopSecondaryNode 12.73
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.71
171 TestMultiControlPlane/serial/RestartSecondaryNode 9.07
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.91
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 96.7
174 TestMultiControlPlane/serial/DeleteSecondaryNode 9.43
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.72
176 TestMultiControlPlane/serial/StopCluster 36.26
177 TestMultiControlPlane/serial/RestartCluster 54.32
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.74
179 TestMultiControlPlane/serial/AddSecondaryNode 67.07
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.92
185 TestJSONOutput/start/Command 39.41
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.7
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.61
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.88
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 35.05
211 TestKicCustomNetwork/use_default_bridge_network 23.26
212 TestKicExistingNetwork 24.22
213 TestKicCustomSubnet 24.35
214 TestKicStaticIP 24.11
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 50.17
219 TestMountStart/serial/StartWithMountFirst 4.57
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 5.03
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.68
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.27
226 TestMountStart/serial/RestartStopped 7.67
227 TestMountStart/serial/VerifyMountPostStop 0.28
230 TestMultiNode/serial/FreshStart2Nodes 62.89
231 TestMultiNode/serial/DeployApp2Nodes 4.54
232 TestMultiNode/serial/PingHostFrom2Pods 0.81
233 TestMultiNode/serial/AddNode 53.11
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.66
236 TestMultiNode/serial/CopyFile 9.88
237 TestMultiNode/serial/StopNode 2.27
238 TestMultiNode/serial/StartAfterStop 6.98
239 TestMultiNode/serial/RestartKeepsNodes 71.64
240 TestMultiNode/serial/DeleteNode 5.27
241 TestMultiNode/serial/StopMultiNode 24.01
242 TestMultiNode/serial/RestartMultiNode 44.51
243 TestMultiNode/serial/ValidateNameConflict 23.23
248 TestPreload 113.82
250 TestScheduledStopUnix 94.91
253 TestInsufficientStorage 9.41
254 TestRunningBinaryUpgrade 51.83
257 TestMissingContainerUpgrade 138.05
258 TestStoppedBinaryUpgrade/Setup 2.99
259 TestStoppedBinaryUpgrade/Upgrade 102.32
267 TestNetworkPlugins/group/false 4.36
271 TestStoppedBinaryUpgrade/MinikubeLogs 1.34
280 TestPause/serial/Start 41.04
282 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
283 TestNoKubernetes/serial/StartWithK8s 22.97
284 TestNoKubernetes/serial/StartWithStopK8s 16.13
285 TestPause/serial/SecondStartNoReconfiguration 5.84
286 TestPause/serial/Pause 0.8
287 TestPause/serial/VerifyStatus 0.4
288 TestPause/serial/Unpause 0.67
289 TestPause/serial/PauseAgain 0.78
290 TestPause/serial/DeletePaused 3.21
291 TestPause/serial/VerifyDeletedResources 0.71
292 TestNoKubernetes/serial/Start 7.97
293 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
294 TestNoKubernetes/serial/VerifyK8sNotRunning 0.33
295 TestNoKubernetes/serial/ProfileList 1.69
296 TestNoKubernetes/serial/Stop 1.32
297 TestNoKubernetes/serial/StartNoArgs 7.11
298 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.34
299 TestNetworkPlugins/group/auto/Start 41.79
300 TestNetworkPlugins/group/kindnet/Start 42.36
301 TestNetworkPlugins/group/auto/KubeletFlags 0.33
302 TestNetworkPlugins/group/auto/NetCatPod 9.2
303 TestNetworkPlugins/group/auto/DNS 0.13
304 TestNetworkPlugins/group/auto/Localhost 0.11
305 TestNetworkPlugins/group/auto/HairPin 0.1
306 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
307 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
308 TestNetworkPlugins/group/kindnet/NetCatPod 8.35
309 TestNetworkPlugins/group/calico/Start 51.36
310 TestNetworkPlugins/group/kindnet/DNS 0.13
311 TestNetworkPlugins/group/kindnet/Localhost 0.11
312 TestNetworkPlugins/group/kindnet/HairPin 0.11
313 TestNetworkPlugins/group/custom-flannel/Start 50.9
314 TestNetworkPlugins/group/calico/ControllerPod 6.01
315 TestNetworkPlugins/group/calico/KubeletFlags 0.3
316 TestNetworkPlugins/group/calico/NetCatPod 8.18
317 TestNetworkPlugins/group/calico/DNS 0.13
318 TestNetworkPlugins/group/calico/Localhost 0.11
319 TestNetworkPlugins/group/calico/HairPin 0.11
320 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
321 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.2
322 TestNetworkPlugins/group/custom-flannel/DNS 0.14
323 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
324 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
325 TestNetworkPlugins/group/enable-default-cni/Start 66.58
326 TestNetworkPlugins/group/flannel/Start 51.38
327 TestNetworkPlugins/group/bridge/Start 63.08
328 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
329 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.19
330 TestNetworkPlugins/group/flannel/ControllerPod 6.01
331 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
332 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
333 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
334 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
335 TestNetworkPlugins/group/flannel/NetCatPod 8.18
336 TestNetworkPlugins/group/flannel/DNS 0.15
337 TestNetworkPlugins/group/flannel/Localhost 0.11
338 TestNetworkPlugins/group/flannel/HairPin 0.12
340 TestStartStop/group/old-k8s-version/serial/FirstStart 52.33
341 TestNetworkPlugins/group/bridge/KubeletFlags 0.33
342 TestNetworkPlugins/group/bridge/NetCatPod 11.26
344 TestStartStop/group/no-preload/serial/FirstStart 53.81
345 TestNetworkPlugins/group/bridge/DNS 0.15
346 TestNetworkPlugins/group/bridge/Localhost 0.13
347 TestNetworkPlugins/group/bridge/HairPin 0.16
349 TestStartStop/group/embed-certs/serial/FirstStart 41.94
352 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.9
353 TestStartStop/group/old-k8s-version/serial/Stop 12.06
355 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.82
356 TestStartStop/group/no-preload/serial/Stop 12.15
357 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
358 TestStartStop/group/old-k8s-version/serial/SecondStart 47.33
359 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.29
360 TestStartStop/group/no-preload/serial/SecondStart 48.17
361 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.89
362 TestStartStop/group/embed-certs/serial/Stop 12.13
363 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.25
364 TestStartStop/group/embed-certs/serial/SecondStart 44.6
365 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
366 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
367 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
368 TestStartStop/group/old-k8s-version/serial/Pause 2.8
369 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
371 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 42.4
372 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
373 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
374 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
375 TestStartStop/group/no-preload/serial/Pause 3.08
377 TestStartStop/group/newest-cni/serial/FirstStart 26.22
378 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.13
379 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
380 TestStartStop/group/embed-certs/serial/Pause 3.22
381 TestStartStop/group/newest-cni/serial/DeployApp 0
382 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.78
383 TestStartStop/group/newest-cni/serial/Stop 1.3
384 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
385 TestStartStop/group/newest-cni/serial/SecondStart 9.91
387 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
388 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
389 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
390 TestStartStop/group/newest-cni/serial/Pause 2.58
391 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.8
392 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.04
393 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
394 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 50.81
395 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
396 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
397 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
398 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.71

TestDownloadOnly/v1.28.0/json-events (12.12s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-696667 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-696667 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (12.117184103s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (12.12s)
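
The json-events subtest exercises minikube start -o=json, whose progress is streamed as one JSON event object per line. A minimal sketch of a consumer for such a stream, assuming only that each stdout line is a self-contained JSON object ("type" matches how minikube keys its cloud-events output, but treat the field names as illustrative):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var ev map[string]interface{}
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON noise on the stream
		}
		fmt.Printf("event type=%v\n", ev["type"])
	}
}

Piping the command under test into it, e.g. out/minikube-linux-amd64 start -o=json ... | go run consumer.go, prints one line per emitted event.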

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1119 01:56:32.650299   14657 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1119 01:56:32.650401   14657 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
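
This check is effectively free because it only stats the tarball cached by the earlier download. A minimal sketch of the idea; the path layout mirrors the log lines above, but the helper itself is hypothetical rather than minikube's actual preload.go:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadExists reports whether a preloaded-images tarball for the given
// Kubernetes version and container runtime is already in the local cache.
func preloadExists(minikubeHome, k8sVersion, runtime string) bool {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-amd64.tar.lz4", k8sVersion, runtime)
	_, err := os.Stat(filepath.Join(minikubeHome, "cache", "preloaded-tarball", name))
	return err == nil
}

func main() {
	home := os.Getenv("MINIKUBE_HOME") // e.g. the .minikube dir used by this run
	fmt.Println(preloadExists(home, "v1.28.0", "containerd"))
}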

TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-696667
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-696667: exit status 85 (68.866901ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-696667 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-696667 │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 01:56:20
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 01:56:20.584162   14669 out.go:360] Setting OutFile to fd 1 ...
	I1119 01:56:20.584308   14669 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:56:20.584313   14669 out.go:374] Setting ErrFile to fd 2...
	I1119 01:56:20.584317   14669 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:56:20.584494   14669 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11107/.minikube/bin
	W1119 01:56:20.584615   14669 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21924-11107/.minikube/config/config.json: open /home/jenkins/minikube-integration/21924-11107/.minikube/config/config.json: no such file or directory
	I1119 01:56:20.585092   14669 out.go:368] Setting JSON to true
	I1119 01:56:20.585960   14669 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2321,"bootTime":1763515060,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 01:56:20.586049   14669 start.go:143] virtualization: kvm guest
	I1119 01:56:20.588389   14669 out.go:99] [download-only-696667] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 01:56:20.588526   14669 notify.go:221] Checking for updates...
	W1119 01:56:20.588529   14669 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball: no such file or directory
	I1119 01:56:20.590079   14669 out.go:171] MINIKUBE_LOCATION=21924
	I1119 01:56:20.591527   14669 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 01:56:20.592990   14669 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21924-11107/kubeconfig
	I1119 01:56:20.594333   14669 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11107/.minikube
	I1119 01:56:20.595753   14669 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1119 01:56:20.598634   14669 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1119 01:56:20.598924   14669 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 01:56:20.626287   14669 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 01:56:20.626412   14669 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 01:56:21.000762   14669 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-19 01:56:20.99086124 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 01:56:21.000860   14669 docker.go:319] overlay module found
	I1119 01:56:21.002836   14669 out.go:99] Using the docker driver based on user configuration
	I1119 01:56:21.002873   14669 start.go:309] selected driver: docker
	I1119 01:56:21.002880   14669 start.go:930] validating driver "docker" against <nil>
	I1119 01:56:21.002980   14669 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 01:56:21.063235   14669 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-19 01:56:21.053097783 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 01:56:21.063399   14669 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 01:56:21.063905   14669 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1119 01:56:21.064080   14669 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1119 01:56:21.065923   14669 out.go:171] Using Docker driver with root privileges
	I1119 01:56:21.067311   14669 cni.go:84] Creating CNI manager for ""
	I1119 01:56:21.067407   14669 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 01:56:21.067427   14669 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 01:56:21.067497   14669 start.go:353] cluster config:
	{Name:download-only-696667 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-696667 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 01:56:21.068931   14669 out.go:99] Starting "download-only-696667" primary control-plane node in "download-only-696667" cluster
	I1119 01:56:21.068956   14669 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1119 01:56:21.070233   14669 out.go:99] Pulling base image v0.0.48-1763507788-21924 ...
	I1119 01:56:21.070283   14669 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1119 01:56:21.070383   14669 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 01:56:21.087670   14669 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a to local cache
	I1119 01:56:21.088023   14669 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory
	I1119 01:56:21.088138   14669 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a to local cache
	I1119 01:56:21.163853   14669 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1119 01:56:21.163885   14669 cache.go:65] Caching tarball of preloaded images
	I1119 01:56:21.164080   14669 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1119 01:56:21.166610   14669 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1119 01:56:21.166639   14669 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1119 01:56:21.275965   14669 preload.go:295] Got checksum from GCS API "2746dfda401436a5341e0500068bf339"
	I1119 01:56:21.276087   14669 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:2746dfda401436a5341e0500068bf339 -> /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-696667 host does not exist
	  To start a cluster, run: "minikube start -p download-only-696667"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
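
The exit status 85 above is tolerated here: a download-only profile never creates a host, so minikube logs has nothing to read, and the subtest only bounds how long that failure takes. Separately, the "Last Start" log shows the preload being fetched with an md5 checksum obtained from the GCS API (the ?checksum=md5:... suffix), so a corrupted tarball fails fast at download time rather than surfacing later as an image-load error. A minimal sketch of that verification step; the hard-coded digest is the one reported above, and the command-line handling is illustrative:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// md5sum computes the hex md5 digest of a file, streaming rather than
// reading it fully into memory (the preload tarball is hundreds of MB).
func md5sum(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: md5check <file>")
		os.Exit(1)
	}
	got, err := md5sum(os.Args[1])
	if err != nil {
		panic(err)
	}
	const want = "2746dfda401436a5341e0500068bf339" // checksum the GCS API returned above
	fmt.Println(got == want)
}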

TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-696667
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.1/json-events (11.19s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-746573 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-746573 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (11.189733076s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (11.19s)
TestDownloadOnly/v1.34.1/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1119 01:56:44.275821   14657 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1119 01:56:44.275863   14657 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)
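The preload-exists check completes in effectively zero time because it reduces to a stat of the cached tarball under the minikube home directory. A sketch of that check (hypothetical helper name; the file-name template and cache layout are inferred from the path printed above):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadExists reports whether the preload tarball for a given Kubernetes
// version and container runtime is already in the local cache, mirroring
// the path printed in the log above.
func preloadExists(minikubeHome, k8sVersion, runtime string) bool {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-amd64.tar.lz4", k8sVersion, runtime)
	_, err := os.Stat(filepath.Join(minikubeHome, "cache", "preloaded-tarball", name))
	return err == nil
}

func main() {
	fmt.Println(preloadExists(os.Getenv("MINIKUBE_HOME"), "v1.34.1", "containerd"))
}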
TestDownloadOnly/v1.34.1/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-746573
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-746573: exit status 85 (74.595479ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-696667 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-696667 │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │ 19 Nov 25 01:56 UTC │
	│ delete  │ -p download-only-696667                                                                                                                                                               │ download-only-696667 │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │ 19 Nov 25 01:56 UTC │
	│ start   │ -o=json --download-only -p download-only-746573 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-746573 │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 01:56:33
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 01:56:33.138135   15043 out.go:360] Setting OutFile to fd 1 ...
	I1119 01:56:33.138458   15043 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:56:33.138466   15043 out.go:374] Setting ErrFile to fd 2...
	I1119 01:56:33.138472   15043 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:56:33.139024   15043 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11107/.minikube/bin
	I1119 01:56:33.139584   15043 out.go:368] Setting JSON to true
	I1119 01:56:33.140357   15043 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2333,"bootTime":1763515060,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 01:56:33.140432   15043 start.go:143] virtualization: kvm guest
	I1119 01:56:33.142306   15043 out.go:99] [download-only-746573] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 01:56:33.142465   15043 notify.go:221] Checking for updates...
	I1119 01:56:33.143436   15043 out.go:171] MINIKUBE_LOCATION=21924
	I1119 01:56:33.144757   15043 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 01:56:33.146041   15043 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21924-11107/kubeconfig
	I1119 01:56:33.147360   15043 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11107/.minikube
	I1119 01:56:33.148567   15043 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1119 01:56:33.150791   15043 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1119 01:56:33.151008   15043 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 01:56:33.175192   15043 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 01:56:33.175267   15043 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 01:56:33.233387   15043 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-19 01:56:33.223571049 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 01:56:33.233489   15043 docker.go:319] overlay module found
	I1119 01:56:33.235166   15043 out.go:99] Using the docker driver based on user configuration
	I1119 01:56:33.235196   15043 start.go:309] selected driver: docker
	I1119 01:56:33.235202   15043 start.go:930] validating driver "docker" against <nil>
	I1119 01:56:33.235290   15043 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 01:56:33.291791   15043 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-19 01:56:33.282290738 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 01:56:33.292017   15043 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 01:56:33.292702   15043 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1119 01:56:33.292890   15043 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1119 01:56:33.294677   15043 out.go:171] Using Docker driver with root privileges
	I1119 01:56:33.296053   15043 cni.go:84] Creating CNI manager for ""
	I1119 01:56:33.296147   15043 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1119 01:56:33.296166   15043 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 01:56:33.296237   15043 start.go:353] cluster config:
	{Name:download-only-746573 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-746573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 01:56:33.297654   15043 out.go:99] Starting "download-only-746573" primary control-plane node in "download-only-746573" cluster
	I1119 01:56:33.297676   15043 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1119 01:56:33.299019   15043 out.go:99] Pulling base image v0.0.48-1763507788-21924 ...
	I1119 01:56:33.299061   15043 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 01:56:33.299173   15043 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 01:56:33.316644   15043 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a to local cache
	I1119 01:56:33.316763   15043 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory
	I1119 01:56:33.316784   15043 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory, skipping pull
	I1119 01:56:33.316791   15043 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in cache, skipping pull
	I1119 01:56:33.316799   15043 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a as a tarball
	I1119 01:56:33.645995   15043 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1119 01:56:33.646043   15043 cache.go:65] Caching tarball of preloaded images
	I1119 01:56:33.646206   15043 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1119 01:56:33.648190   15043 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1119 01:56:33.648207   15043 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1119 01:56:33.744201   15043 preload.go:295] Got checksum from GCS API "5d6e976daeaa84851976fc4d674fd8f4"
	I1119 01:56:33.744244   15043 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4?checksum=md5:5d6e976daeaa84851976fc4d674fd8f4 -> /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-746573 host does not exist
	  To start a cluster, run: "minikube start -p download-only-746573"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.08s)
TestDownloadOnly/v1.34.1/DeleteAll (0.24s)
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.24s)
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-746573
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)
TestDownloadOnlyKic (0.42s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-069032 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-069032" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-069032
--- PASS: TestDownloadOnlyKic (0.42s)
TestBinaryMirror (0.83s)
=== RUN   TestBinaryMirror
I1119 01:56:45.450935   14657 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-267850 --alsologtostderr --binary-mirror http://127.0.0.1:35209 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-267850" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-267850
--- PASS: TestBinaryMirror (0.83s)
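TestBinaryMirror exercises the other checksum form seen above: `checksum=file:<url>` points at a sidecar `.sha256` file published next to the kubectl binary, so the expected digest must be fetched before the download can be verified. A stdlib sketch of resolving that hint (illustrative; assumes the sidecar holds the hex digest, optionally followed by a file name):

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"strings"
)

// fetchExpectedSHA256 downloads a sidecar .sha256 file and returns the digest,
// tolerating both "digest" and "digest  filename" layouts.
func fetchExpectedSHA256(sumURL string) (string, error) {
	resp, err := http.Get(sumURL)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("unexpected status: %s", resp.Status)
	}
	b, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	fields := strings.Fields(string(b))
	if len(fields) == 0 {
		return "", fmt.Errorf("empty checksum file at %s", sumURL)
	}
	return fields[0], nil
}

func main() {
	// Sidecar URL taken from the binary.go log line above.
	sum, err := fetchExpectedSHA256("https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expected sha256:", sum)
}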
TestOffline (56.59s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-844028 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-844028 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (49.436347115s)
helpers_test.go:175: Cleaning up "offline-containerd-844028" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-844028
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-844028: (7.157640351s)
--- PASS: TestOffline (56.59s)
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-168589
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-168589: exit status 85 (62.146014ms)
-- stdout --
	* Profile "addons-168589" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-168589"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-168589
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-168589: exit status 85 (63.302661ms)
-- stdout --
	* Profile "addons-168589" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-168589"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
TestAddons/Setup (124.89s)
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-168589 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-168589 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m4.886216638s)
--- PASS: TestAddons/Setup (124.89s)
TestAddons/serial/Volcano (40.25s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 17.240944ms
addons_test.go:876: volcano-admission stabilized in 17.299181ms
addons_test.go:868: volcano-scheduler stabilized in 17.531904ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-x5n42" [f0d87f19-4223-46d6-a18f-0184f15664e5] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.004026752s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-5lwmj" [c04982ba-6e3d-4cea-8e44-5745bcf6c5ef] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003530962s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-fjmdw" [6f066d30-5fd9-4c64-82cb-4bc63eea39ad] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003152847s
addons_test.go:903: (dbg) Run:  kubectl --context addons-168589 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-168589 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-168589 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [9d8adee1-2685-4e00-baad-22e341f598d3] Pending
helpers_test.go:352: "test-job-nginx-0" [9d8adee1-2685-4e00-baad-22e341f598d3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [9d8adee1-2685-4e00-baad-22e341f598d3] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.003280928s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-168589 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-168589 addons disable volcano --alsologtostderr -v=1: (11.858925255s)
--- PASS: TestAddons/serial/Volcano (40.25s)
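Each "waiting 6m0s for pods matching ..." step above is a poll against the cluster until the labelled pods report Running and healthy. An equivalent one-shot check shelling out to `kubectl wait` (not the test helper's actual implementation; the profile name, namespace, and label come from the log):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Block until the volcano-scheduler pods are Ready, or fail after the
	// same 6m budget the test uses.
	cmd := exec.Command("kubectl", "--context", "addons-168589",
		"-n", "volcano-system", "wait", "pod",
		"-l", "app=volcano-scheduler",
		"--for=condition=Ready", "--timeout=6m")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("pods never became Ready: %v\n%s", err, out)
	}
}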
TestAddons/serial/GCPAuth/Namespaces (0.11s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-168589 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-168589 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)
TestAddons/serial/GCPAuth/FakeCredentials (10.46s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-168589 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-168589 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [cca69a16-c95f-4449-b968-719eb6ff0e5a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [cca69a16-c95f-4449-b968-719eb6ff0e5a] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003621271s
addons_test.go:694: (dbg) Run:  kubectl --context addons-168589 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-168589 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-168589 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.46s)
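The printenv assertions work because the gcp-auth addon installs a mutating admission webhook: pods created after it is enabled get GOOGLE_APPLICATION_CREDENTIALS (pointing at mounted fake credentials) injected into their spec. A sketch of the same verification via kubectl exec:

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	// printenv exits non-zero when the variable is absent, so an error here
	// means the gcp-auth webhook did not mutate the pod spec.
	out, err := exec.Command("kubectl", "--context", "addons-168589",
		"exec", "busybox", "--", "printenv", "GOOGLE_APPLICATION_CREDENTIALS").Output()
	if err != nil {
		log.Fatalf("credentials env var not injected: %v", err)
	}
	log.Printf("injected path: %s", strings.TrimSpace(string(out)))
}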
TestAddons/parallel/Registry (15.32s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 7.056588ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-z9rtf" [101fb4ec-ce75-47c2-9748-b94e863a6811] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003210921s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-2mznv" [fcedfc61-81b5-43e5-bf8e-a9d91f69648f] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003007793s
addons_test.go:392: (dbg) Run:  kubectl --context addons-168589 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-168589 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-168589 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.485190602s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-168589 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-168589 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.32s)
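The registry probe relies on cluster DNS: a Service is reachable in-cluster at <service>.<namespace>.svc.cluster.local. The same reachability check in Go, assuming it runs inside a pod on the cluster (a HEAD request mirrors `wget --spider`: it proves the name resolves and the server answers without downloading a body):

package main

import (
	"log"
	"net/http"
)

func main() {
	resp, err := http.Head("http://registry.kube-system.svc.cluster.local")
	if err != nil {
		log.Fatalf("registry unreachable: %v", err)
	}
	resp.Body.Close()
	log.Printf("registry answered: %s", resp.Status)
}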
TestAddons/parallel/RegistryCreds (0.65s)
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.048411ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-168589
addons_test.go:332: (dbg) Run:  kubectl --context addons-168589 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-168589 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.65s)
TestAddons/parallel/Ingress (20.95s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-168589 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-168589 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-168589 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [952abd99-6e7e-454d-8c43-29239e314e32] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [952abd99-6e7e-454d-8c43-29239e314e32] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.002892918s
I1119 02:00:30.803578   14657 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-168589 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-168589 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-168589 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-168589 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-168589 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-168589 addons disable ingress --alsologtostderr -v=1: (7.70453176s)
--- PASS: TestAddons/parallel/Ingress (20.95s)
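The curl step above hits the node IP directly but sends `Host: nginx.example.com`, which is what the ingress controller matches against the Ingress rule. In Go the equivalent knob is Request.Host, since the client ignores a Host entry set through req.Header (a sketch using the same addresses as the test):

package main

import (
	"log"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
	if err != nil {
		log.Fatal(err)
	}
	// Request.Host controls the HTTP/1.1 Host line, which is what the
	// ingress controller routes on; it plays the role of curl -H 'Host: ...'.
	req.Host = "nginx.example.com"
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println(resp.Status)
}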
TestAddons/parallel/InspektorGadget (10.65s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-hbkqz" [5dbb9cf3-d608-4ec6-b145-dfb13e310c83] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003929125s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-168589 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-168589 addons disable inspektor-gadget --alsologtostderr -v=1: (5.647203745s)
--- PASS: TestAddons/parallel/InspektorGadget (10.65s)
TestAddons/parallel/MetricsServer (5.64s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.389733ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-bnpxc" [be7daa5a-664d-4338-aa4f-da506752ea6d] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003139659s
addons_test.go:463: (dbg) Run:  kubectl --context addons-168589 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-168589 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.64s)
TestAddons/parallel/CSI (48.28s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.299739ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-168589 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-168589 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-168589 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-168589 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-168589 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-168589 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [64663c81-bd52-4d7c-8c0e-39fd14a8d36a] Pending
helpers_test.go:352: "task-pv-pod" [64663c81-bd52-4d7c-8c0e-39fd14a8d36a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [64663c81-bd52-4d7c-8c0e-39fd14a8d36a] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003852464s
addons_test.go:572: (dbg) Run:  kubectl --context addons-168589 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-168589 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-168589 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-168589 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-168589 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-168589 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
2025/11/19 02:00:05 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-168589 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-168589 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-168589 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-168589 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-168589 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-168589 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-168589 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-168589 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-168589 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-168589 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-168589 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-168589 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-168589 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-168589 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-168589 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-168589 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [3b61ab66-50ac-4597-abba-d35e35ce9811] Pending
helpers_test.go:352: "task-pv-pod-restore" [3b61ab66-50ac-4597-abba-d35e35ce9811] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [3b61ab66-50ac-4597-abba-d35e35ce9811] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 11.004117758s
addons_test.go:614: (dbg) Run:  kubectl --context addons-168589 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-168589 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-168589 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-168589 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-168589 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-168589 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.571383721s)
--- PASS: TestAddons/parallel/CSI (48.28s)
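The repeated `get pvc ... -o jsonpath={.status.phase}` lines above are a single retry loop: the helper re-queries the claim until it reports Bound or a deadline passes (restoring from the volume snapshot is what makes hpvc-restore slow to bind). A sketch of that loop (hypothetical function name; the real helper lives in helpers_test.go):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

// waitPVCBound polls the claim's .status.phase until it is Bound, mirroring
// the repeated jsonpath queries in the log above.
func waitPVCBound(kubeContext, namespace, claim string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", claim, "-n", namespace,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s not Bound within %s", namespace, claim, timeout)
}

func main() {
	if err := waitPVCBound("addons-168589", "default", "hpvc", 6*time.Minute); err != nil {
		log.Fatal(err)
	}
}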
TestAddons/parallel/Headlamp (17.48s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-168589 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-p9g22" [7a465c7f-6be3-4120-98de-c16ac4055512] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-p9g22" [7a465c7f-6be3-4120-98de-c16ac4055512] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003297134s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-168589 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-168589 addons disable headlamp --alsologtostderr -v=1: (5.709373321s)
--- PASS: TestAddons/parallel/Headlamp (17.48s)
TestAddons/parallel/CloudSpanner (5.54s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-rhrcj" [beeab00b-cc99-412e-8d3f-607cfb9a4e0b] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00302422s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-168589 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.54s)
TestAddons/parallel/LocalPath (12.22s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-168589 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-168589 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-168589 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-168589 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-168589 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-168589 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-168589 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-168589 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-168589 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [b5ed76cb-85ed-4a35-b526-54568e8a4d08] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [b5ed76cb-85ed-4a35-b526-54568e8a4d08] Running
helpers_test.go:352: "test-local-path" [b5ed76cb-85ed-4a35-b526-54568e8a4d08] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [b5ed76cb-85ed-4a35-b526-54568e8a4d08] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003369716s
addons_test.go:967: (dbg) Run:  kubectl --context addons-168589 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-168589 ssh "cat /opt/local-path-provisioner/pvc-7e65202b-ad54-4d64-b65d-a91db422a2f9_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-168589 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-168589 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-168589 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (12.22s)
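The `ssh "cat /opt/local-path-provisioner/..."` step works because local-path-provisioner backs each PV with a plain host directory; judging by the path in the log, the directory is named `<volumeName>_<namespace>_<claimName>`. A sketch of reconstructing that path (the layout is inferred from the log line above, not taken from the provisioner's docs):

package main

import (
	"fmt"
	"path/filepath"
)

// hostPathFor reconstructs where local-path-provisioner stores a claim's
// data on the node, following the directory name visible in the log.
func hostPathFor(volumeName, namespace, claim string) string {
	return filepath.Join("/opt/local-path-provisioner",
		fmt.Sprintf("%s_%s_%s", volumeName, namespace, claim))
}

func main() {
	fmt.Println(hostPathFor("pvc-7e65202b-ad54-4d64-b65d-a91db422a2f9", "default", "test-pvc"))
}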
TestAddons/parallel/NvidiaDevicePlugin (6.5s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
I1119 01:59:50.975760   14657 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-mprk6" [59c72d09-312b-448a-8789-40e3e1e4c058] Running
I1119 01:59:50.979014   14657 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1119 01:59:50.979033   14657 kapi.go:107] duration metric: took 3.289274ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003899952s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-168589 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.50s)
TestAddons/parallel/Yakd (10.72s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-cqwdj" [b62eac20-4402-4fa5-9388-8c8d9312ddde] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003820462s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-168589 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-168589 addons disable yakd --alsologtostderr -v=1: (5.715360548s)
--- PASS: TestAddons/parallel/Yakd (10.72s)
TestAddons/parallel/AmdGpuDevicePlugin (5.5s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-z9v8q" [f9384133-5263-4599-b4e6-bf2c23273f5b] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003558734s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-168589 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.50s)
TestAddons/StoppedEnableDisable (12.27s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-168589
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-168589: (11.987131794s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-168589
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-168589
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-168589
--- PASS: TestAddons/StoppedEnableDisable (12.27s)
TestCertOptions (25.13s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-161228 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-161228 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (22.404183496s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-161228 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-161228 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-161228 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-161228" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-161228
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-161228: (2.024103399s)
--- PASS: TestCertOptions (25.13s)
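
Note: the openssl step above is the heart of this test. A hand-run sketch of the same SAN check (profile name taken from this run; the grep filter is an editorial addition):
  out/minikube-linux-amd64 -p cert-options-161228 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
A passing run should list the requested values (IP:127.0.0.1, IP:192.168.15.15, DNS:localhost, DNS:www.google.com) among the SANs.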

TestCertExpiration (216.4s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-542721 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-542721 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (27.943983506s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-542721 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-542721 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (5.917569584s)
helpers_test.go:175: Cleaning up "cert-expiration-542721" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-542721
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-542721: (2.534734664s)
--- PASS: TestCertExpiration (216.40s)
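
Note: 8760h is one year; most of this test's 216s wall time is spent waiting for the initial 3-minute certificates to lapse before the second start forces re-issuance. A rough sketch of the same flow (the sleep is a stand-in for the test's own wait):
  out/minikube-linux-amd64 start -p cert-expiration-542721 --memory=3072 --cert-expiration=3m --driver=docker --container-runtime=containerd
  sleep 180
  out/minikube-linux-amd64 start -p cert-expiration-542721 --memory=3072 --cert-expiration=8760h --driver=docker --container-runtime=containerd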

TestForceSystemdFlag (25.82s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-683517 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-683517 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (23.361615923s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-683517 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-683517" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-683517
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-683517: (2.132697161s)
--- PASS: TestForceSystemdFlag (25.82s)
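
Note: the config.toml check above verifies that --force-systemd switched containerd to the systemd cgroup driver. A minimal sketch, assuming the standard containerd key name:
  out/minikube-linux-amd64 -p force-systemd-flag-683517 ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup
  # expected on a passing run: SystemdCgroup = true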

TestForceSystemdEnv (26.81s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-541690 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-541690 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (23.196450099s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-541690 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-541690" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-541690
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-541690: (3.249107039s)
--- PASS: TestForceSystemdEnv (26.81s)

TestDockerEnvContainerd (36.21s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-431976 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-431976 --driver=docker  --container-runtime=containerd: (19.988951443s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-431976"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-431976": (1.017129329s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXX57IlCW/agent.38359" SSH_AGENT_PID="38360" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXX57IlCW/agent.38359" SSH_AGENT_PID="38360" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXX57IlCW/agent.38359" SSH_AGENT_PID="38360" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.918614874s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXX57IlCW/agent.38359" SSH_AGENT_PID="38360" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-431976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-431976
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-431976: (2.329681876s)
--- PASS: TestDockerEnvContainerd (36.21s)
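
Note: the SSH_AUTH_SOCK/SSH_AGENT_PID/DOCKER_HOST exports above are the output of docker-env --ssh-host --ssh-add. The usual interactive equivalent evaluates them in place (sketch, assuming a docker CLI on the host):
  eval "$(out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-431976)"
  docker version    # now talks to the daemon inside the minikube node over SSH
  docker image ls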

TestErrorSpam/setup (21.06s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-674859 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-674859 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-674859 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-674859 --driver=docker  --container-runtime=containerd: (21.058663386s)
--- PASS: TestErrorSpam/setup (21.06s)

TestErrorSpam/start (0.68s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-674859 --log_dir /tmp/nospam-674859 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-674859 --log_dir /tmp/nospam-674859 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-674859 --log_dir /tmp/nospam-674859 start --dry-run
--- PASS: TestErrorSpam/start (0.68s)

TestErrorSpam/status (0.95s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-674859 --log_dir /tmp/nospam-674859 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-674859 --log_dir /tmp/nospam-674859 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-674859 --log_dir /tmp/nospam-674859 status
--- PASS: TestErrorSpam/status (0.95s)

TestErrorSpam/pause (1.48s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-674859 --log_dir /tmp/nospam-674859 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-674859 --log_dir /tmp/nospam-674859 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-674859 --log_dir /tmp/nospam-674859 pause
--- PASS: TestErrorSpam/pause (1.48s)

TestErrorSpam/unpause (1.53s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-674859 --log_dir /tmp/nospam-674859 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-674859 --log_dir /tmp/nospam-674859 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-674859 --log_dir /tmp/nospam-674859 unpause
--- PASS: TestErrorSpam/unpause (1.53s)

TestErrorSpam/stop (1.5s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-674859 --log_dir /tmp/nospam-674859 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-674859 --log_dir /tmp/nospam-674859 stop: (1.290593517s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-674859 --log_dir /tmp/nospam-674859 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-674859 --log_dir /tmp/nospam-674859 stop
--- PASS: TestErrorSpam/stop (1.50s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/test/nested/copy/14657/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (39.5s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-266785 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-266785 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (39.501168377s)
--- PASS: TestFunctional/serial/StartWithProxy (39.50s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.15s)

=== RUN   TestFunctional/serial/SoftStart
I1119 02:02:46.538517   14657 config.go:182] Loaded profile config "functional-266785": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-266785 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-266785 --alsologtostderr -v=8: (6.150632154s)
functional_test.go:678: soft start took 6.151387671s for "functional-266785" cluster.
I1119 02:02:52.689596   14657 config.go:182] Loaded profile config "functional-266785": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.15s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-266785 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-266785 cache add registry.k8s.io/pause:3.1: (1.131032536s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-266785 cache add registry.k8s.io/pause:3.3: (1.101377248s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.21s)

TestFunctional/serial/CacheCmd/cache/add_local (1.94s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-266785 /tmp/TestFunctionalserialCacheCmdcacheadd_local4272865546/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 cache add minikube-local-cache-test:functional-266785
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-266785 cache add minikube-local-cache-test:functional-266785: (1.573506962s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 cache delete minikube-local-cache-test:functional-266785
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-266785
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.94s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.62s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-266785 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (289.726798ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.62s)
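
Note: this subtest verifies that cache reload restores an image removed from the node. Condensed, the flow above is:
  out/minikube-linux-amd64 -p functional-266785 ssh sudo crictl rmi registry.k8s.io/pause:latest
  out/minikube-linux-amd64 -p functional-266785 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # exits 1: image gone
  out/minikube-linux-amd64 -p functional-266785 cache reload
  out/minikube-linux-amd64 -p functional-266785 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # succeeds again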

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 kubectl -- --context functional-266785 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-266785 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (47.15s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-266785 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-266785 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (47.15055368s)
functional_test.go:776: restart took 47.150692811s for "functional-266785" cluster.
I1119 02:03:47.542494   14657 config.go:182] Loaded profile config "functional-266785": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (47.15s)
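
Note: --extra-config takes component.key=value pairs; here it adds an apiserver admission plugin across a restart. One illustrative way to confirm the flag reached the apiserver (an editorial sketch, not part of the test; assumes the kubeadm label component=kube-apiserver):
  kubectl --context functional-266785 -n kube-system get pod -l component=kube-apiserver -o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep enable-admission-plugins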

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-266785 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.25s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-266785 logs: (1.249011165s)
--- PASS: TestFunctional/serial/LogsCmd (1.25s)

TestFunctional/serial/LogsFileCmd (1.26s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 logs --file /tmp/TestFunctionalserialLogsFileCmd3477338905/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-266785 logs --file /tmp/TestFunctionalserialLogsFileCmd3477338905/001/logs.txt: (1.26223245s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.26s)

TestFunctional/serial/InvalidService (3.89s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-266785 apply -f testdata/invalidsvc.yaml
E1119 02:03:51.238139   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/addons-168589/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:03:51.244663   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/addons-168589/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:03:51.256033   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/addons-168589/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:03:51.277535   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/addons-168589/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:03:51.319083   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/addons-168589/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:03:51.400573   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/addons-168589/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:03:51.562140   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/addons-168589/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:03:51.883825   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/addons-168589/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:03:52.525863   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/addons-168589/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-266785
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-266785: exit status 115 (348.792809ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30824 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-266785 delete -f testdata/invalidsvc.yaml
E1119 02:03:53.807475   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/addons-168589/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/serial/InvalidService (3.89s)
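
Note: exit status 115 corresponds to the SVC_UNREACHABLE error above: the Service object exists (hence the URL table on stdout) but no running pod backs it. A condensed reproduction sketch:
  kubectl --context functional-266785 apply -f testdata/invalidsvc.yaml
  out/minikube-linux-amd64 service invalid-svc -p functional-266785; echo $?    # 115
  kubectl --context functional-266785 delete -f testdata/invalidsvc.yaml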

TestFunctional/parallel/ConfigCmd (0.41s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-266785 config get cpus: exit status 14 (70.419256ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-266785 config get cpus: exit status 14 (66.091754ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)
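
Note: config get on an unset key exits 14 ("specified key could not be found in config") rather than printing an empty value. The cycle exercised above, condensed:
  out/minikube-linux-amd64 -p functional-266785 config set cpus 2
  out/minikube-linux-amd64 -p functional-266785 config get cpus      # prints 2
  out/minikube-linux-amd64 -p functional-266785 config unset cpus
  out/minikube-linux-amd64 -p functional-266785 config get cpus      # exit status 14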

TestFunctional/parallel/DashboardCmd (12.29s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-266785 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-266785 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 57638: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.29s)

TestFunctional/parallel/DryRun (0.5s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-266785 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-266785 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (205.884367ms)
-- stdout --
	* [functional-266785] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21924
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21924-11107/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11107/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1119 02:04:04.553268   56544 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:04:04.553414   56544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:04:04.553426   56544 out.go:374] Setting ErrFile to fd 2...
	I1119 02:04:04.553433   56544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:04:04.553784   56544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11107/.minikube/bin
	I1119 02:04:04.554399   56544 out.go:368] Setting JSON to false
	I1119 02:04:04.555756   56544 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2785,"bootTime":1763515060,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 02:04:04.555872   56544 start.go:143] virtualization: kvm guest
	I1119 02:04:04.558838   56544 out.go:179] * [functional-266785] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 02:04:04.560413   56544 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:04:04.560413   56544 notify.go:221] Checking for updates...
	I1119 02:04:04.563600   56544 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:04:04.565014   56544 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-11107/kubeconfig
	I1119 02:04:04.566318   56544 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11107/.minikube
	I1119 02:04:04.567909   56544 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 02:04:04.569230   56544 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:04:04.571286   56544 config.go:182] Loaded profile config "functional-266785": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:04:04.572146   56544 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:04:04.600064   56544 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 02:04:04.600165   56544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:04:04.671385   56544 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-19 02:04:04.660310778 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:04:04.671494   56544 docker.go:319] overlay module found
	I1119 02:04:04.674949   56544 out.go:179] * Using the docker driver based on existing profile
	I1119 02:04:04.676466   56544 start.go:309] selected driver: docker
	I1119 02:04:04.676485   56544 start.go:930] validating driver "docker" against &{Name:functional-266785 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-266785 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:04:04.676608   56544 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:04:04.678494   56544 out.go:203] 
	W1119 02:04:04.679688   56544 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1119 02:04:04.680955   56544 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-266785 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.50s)
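
Note: exit status 23 is RSRC_INSUFFICIENT_REQ_MEMORY; with --dry-run minikube validates the request (250MB here, against the 1800MB floor named in the error) without modifying the existing profile. Sketch:
  out/minikube-linux-amd64 start -p functional-266785 --dry-run --memory 250MB --driver=docker --container-runtime=containerd
  echo $?    # 23; the running cluster is left untouched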

TestFunctional/parallel/InternationalLanguage (0.23s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-266785 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-266785 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (233.139189ms)
-- stdout --
	* [functional-266785] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21924
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21924-11107/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11107/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1119 02:04:04.325838   56351 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:04:04.325976   56351 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:04:04.326008   56351 out.go:374] Setting ErrFile to fd 2...
	I1119 02:04:04.326016   56351 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:04:04.326437   56351 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11107/.minikube/bin
	I1119 02:04:04.327049   56351 out.go:368] Setting JSON to false
	I1119 02:04:04.328391   56351 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2784,"bootTime":1763515060,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 02:04:04.328511   56351 start.go:143] virtualization: kvm guest
	I1119 02:04:04.330550   56351 out.go:179] * [functional-266785] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1119 02:04:04.332830   56351 notify.go:221] Checking for updates...
	I1119 02:04:04.332866   56351 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:04:04.334639   56351 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:04:04.336625   56351 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-11107/kubeconfig
	I1119 02:04:04.338238   56351 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11107/.minikube
	I1119 02:04:04.339642   56351 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 02:04:04.341993   56351 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:04:04.343919   56351 config.go:182] Loaded profile config "functional-266785": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:04:04.344655   56351 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:04:04.377260   56351 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 02:04:04.377351   56351 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:04:04.459855   56351 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-19 02:04:04.445124484 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:04:04.460022   56351 docker.go:319] overlay module found
	I1119 02:04:04.463469   56351 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1119 02:04:04.466045   56351 start.go:309] selected driver: docker
	I1119 02:04:04.466067   56351 start.go:930] validating driver "docker" against &{Name:functional-266785 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-266785 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:04:04.466180   56351 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:04:04.470509   56351 out.go:203] 
	W1119 02:04:04.472088   56351 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1119 02:04:04.473478   56351 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.23s)

TestFunctional/parallel/StatusCmd (1.09s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.09s)

TestFunctional/parallel/ServiceCmdConnect (8.55s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-266785 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-266785 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-9cxdc" [26a1de94-0261-4829-b2b6-5e1d3ceb0221] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-9cxdc" [26a1de94-0261-4829-b2b6-5e1d3ceb0221] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.004300956s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:32638
functional_test.go:1680: http://192.168.49.2:32638: success! body:
Request served by hello-node-connect-7d85dfc575-9cxdc
HTTP/1.1 GET /
Host: 192.168.49.2:32638
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.55s)
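
Note: the test fetches the NodePort URL returned by service --url and checks that the echo-server reflects the request, as in the body captured above. A hand-run equivalent, assuming curl on the host (the test itself uses a Go HTTP client):
  kubectl --context functional-266785 create deployment hello-node-connect --image kicbase/echo-server
  kubectl --context functional-266785 expose deployment hello-node-connect --type=NodePort --port=8080
  curl -s "$(out/minikube-linux-amd64 -p functional-266785 service hello-node-connect --url)"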

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (30.1s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [a0b9a265-be3e-4db8-a0c1-3d70e7fb0ab9] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003907727s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-266785 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-266785 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-266785 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-266785 apply -f testdata/storage-provisioner/pod.yaml
I1119 02:04:00.794200   14657 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [9f0a67f2-cb4f-4c44-86c7-d97396e02824] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [9f0a67f2-cb4f-4c44-86c7-d97396e02824] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004318028s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-266785 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-266785 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-266785 delete -f testdata/storage-provisioner/pod.yaml: (1.161151752s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-266785 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [9c070485-aa12-4627-9a45-17a6b5688a76] Pending
helpers_test.go:352: "sp-pod" [9c070485-aa12-4627-9a45-17a6b5688a76] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [9c070485-aa12-4627-9a45-17a6b5688a76] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.183603962s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-266785 exec sp-pod -- ls /tmp/mount
I1119 02:04:24.398222   14657 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (30.10s)
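Note: condensed, the persistence check above is: bind a PVC, write through one pod, delete that pod, then read the file back from a fresh pod on the same claim. A minimal sketch using the exact commands from this run:
    kubectl --context functional-266785 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-266785 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-266785 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-266785 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-266785 apply -f testdata/storage-provisioner/pod.yaml    # new pod, same claim
    kubectl --context functional-266785 exec sp-pod -- ls /tmp/mount                      # foo survives the pod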

TestFunctional/parallel/SSHCmd (0.56s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.56s)

TestFunctional/parallel/CpCmd (1.66s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 ssh -n functional-266785 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 cp functional-266785:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3566789424/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 ssh -n functional-266785 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 ssh -n functional-266785 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.66s)
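Note: the three cp cases above cover host-to-guest, guest-to-host, and host-to-an-arbitrary-guest-path copies. A minimal sketch (the /tmp destination below is illustrative, not the test's generated temp dir):
    out/minikube-linux-amd64 -p functional-266785 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-266785 cp functional-266785:/home/docker/cp-test.txt /tmp/cp-test.txt
    out/minikube-linux-amd64 -p functional-266785 ssh -n functional-266785 "sudo cat /home/docker/cp-test.txt"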

TestFunctional/parallel/MySQL (19.1s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-266785 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
2025/11/19 02:04:17 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:352: "mysql-5bb876957f-dmntl" [6057c273-b1e0-42fb-9b96-9b4dd84629cf] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-dmntl" [6057c273-b1e0-42fb-9b96-9b4dd84629cf] Running
E1119 02:04:32.215499   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/addons-168589/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.00384679s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-266785 exec mysql-5bb876957f-dmntl -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-266785 exec mysql-5bb876957f-dmntl -- mysql -ppassword -e "show databases;": exit status 1 (108.744015ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1119 02:04:33.713416   14657 retry.go:31] will retry after 636.181855ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-266785 exec mysql-5bb876957f-dmntl -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-266785 exec mysql-5bb876957f-dmntl -- mysql -ppassword -e "show databases;": exit status 1 (118.536516ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1119 02:04:34.468872   14657 retry.go:31] will retry after 941.489153ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-266785 exec mysql-5bb876957f-dmntl -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (19.10s)
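Note: the ERROR 2002 retries above are expected; the pod reports Running before mysqld is accepting socket connections, so the test polls until the query succeeds. The equivalent shell loop (the pod name is generated per run):
    until kubectl --context functional-266785 exec mysql-5bb876957f-dmntl -- \
        mysql -ppassword -e "show databases;"; do
      sleep 1    # ERROR 2002 just means mysqld is still starting
    done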

TestFunctional/parallel/FileSync (0.29s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/14657/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 ssh "sudo cat /etc/test/nested/copy/14657/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

TestFunctional/parallel/CertSync (1.73s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/14657.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 ssh "sudo cat /etc/ssl/certs/14657.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/14657.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 ssh "sudo cat /usr/share/ca-certificates/14657.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/146572.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 ssh "sudo cat /etc/ssl/certs/146572.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/146572.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 ssh "sudo cat /usr/share/ca-certificates/146572.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.73s)

TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-266785 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.6s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-266785 ssh "sudo systemctl is-active docker": exit status 1 (317.22525ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-266785 ssh "sudo systemctl is-active crio": exit status 1 (286.317146ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.60s)
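Note: systemctl is-active exits non-zero for an inactive unit (the "Process exited with status 3" above), so both probes "fail" while printing inactive, which is exactly the expected result when containerd is the active runtime. Roughly (the containerd check is an extra sanity probe, not part of this test):
    out/minikube-linux-amd64 -p functional-266785 ssh "sudo systemctl is-active containerd"   # active, exit 0
    out/minikube-linux-amd64 -p functional-266785 ssh "sudo systemctl is-active docker"       # inactive, non-zero exit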

TestFunctional/parallel/License (0.59s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.59s)

TestFunctional/parallel/Version/short (0.06s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.5s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.50s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-266785 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-266785
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-266785
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-266785 image ls --format short --alsologtostderr:
I1119 02:04:20.107957   62217 out.go:360] Setting OutFile to fd 1 ...
I1119 02:04:20.108258   62217 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 02:04:20.108269   62217 out.go:374] Setting ErrFile to fd 2...
I1119 02:04:20.108273   62217 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 02:04:20.108491   62217 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11107/.minikube/bin
I1119 02:04:20.109235   62217 config.go:182] Loaded profile config "functional-266785": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1119 02:04:20.109397   62217 config.go:182] Loaded profile config "functional-266785": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1119 02:04:20.109812   62217 cli_runner.go:164] Run: docker container inspect functional-266785 --format={{.State.Status}}
I1119 02:04:20.130471   62217 ssh_runner.go:195] Run: systemctl --version
I1119 02:04:20.130532   62217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-266785
I1119 02:04:20.153940   62217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/functional-266785/id_rsa Username:docker}
I1119 02:04:20.250501   62217 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
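Note: the same image inventory is rendered by each of the four supported list formats, which this test and the next three exercise in turn:
    out/minikube-linux-amd64 -p functional-266785 image ls --format short
    out/minikube-linux-amd64 -p functional-266785 image ls --format table
    out/minikube-linux-amd64 -p functional-266785 image ls --format json
    out/minikube-linux-amd64 -p functional-266785 image ls --format yaml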

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-266785 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:409467 │ 44.4MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:6e38f4 │ 9.06MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.1            │ sha256:7dd6aa │ 17.4MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:cd073f │ 320kB  │
│ docker.io/kicbase/echo-server               │ functional-266785  │ sha256:9056ab │ 2.37MB │
│ docker.io/library/nginx                     │ alpine             │ sha256:d4918c │ 22.6MB │
│ docker.io/library/nginx                     │ latest             │ sha256:60adc2 │ 59.8MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:56cc51 │ 2.4MB  │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:52546a │ 22.4MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1            │ sha256:c80c8d │ 22.8MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.1            │ sha256:fc2517 │ 26MB   │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:5f1f52 │ 74.3MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.1            │ sha256:c3994b │ 27.1MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:da86e6 │ 315kB  │
│ docker.io/library/minikube-local-cache-test │ functional-266785  │ sha256:91a0df │ 993B   │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:0184c1 │ 298kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:350b16 │ 72.3kB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-266785 image ls --format table --alsologtostderr:
I1119 02:04:24.647394   62705 out.go:360] Setting OutFile to fd 1 ...
I1119 02:04:24.647490   62705 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 02:04:24.647494   62705 out.go:374] Setting ErrFile to fd 2...
I1119 02:04:24.647498   62705 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 02:04:24.647778   62705 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11107/.minikube/bin
I1119 02:04:24.648536   62705 config.go:182] Loaded profile config "functional-266785": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1119 02:04:24.648703   62705 config.go:182] Loaded profile config "functional-266785": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1119 02:04:24.649154   62705 cli_runner.go:164] Run: docker container inspect functional-266785 --format={{.State.Status}}
I1119 02:04:24.673897   62705 ssh_runner.go:195] Run: systemctl --version
I1119 02:04:24.673963   62705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-266785
I1119 02:04:24.695894   62705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/functional-266785/id_rsa Username:docker}
I1119 02:04:24.798959   62705 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-266785 image ls --format json --alsologtostderr:
[{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"320448"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42"],"repoTags":["docker.io/library/nginx:latest"],"size":"59772801"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k
8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"25963718"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-266785"],"size":"2372971"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a801
25d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"74311308"},{"id":"sha256:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"22631814"},{"id":"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"17385568"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"si
ze":"27061991"},{"id":"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"22820214"},{"id":"sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:91a0df4b219957b64d8cf7d8ae571fc3d10ff0f99c0bdd18f56d3518b4c7cd51","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-266785"],"size":"993"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-
provisioner:v5"],"size":"9058936"},{"id":"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"22384805"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-266785 image ls --format json --alsologtostderr:
I1119 02:04:24.580584   62694 out.go:360] Setting OutFile to fd 1 ...
I1119 02:04:24.580706   62694 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 02:04:24.580716   62694 out.go:374] Setting ErrFile to fd 2...
I1119 02:04:24.580722   62694 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 02:04:24.580989   62694 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11107/.minikube/bin
I1119 02:04:24.581674   62694 config.go:182] Loaded profile config "functional-266785": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1119 02:04:24.582052   62694 config.go:182] Loaded profile config "functional-266785": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1119 02:04:24.582627   62694 cli_runner.go:164] Run: docker container inspect functional-266785 --format={{.State.Status}}
I1119 02:04:24.606853   62694 ssh_runner.go:195] Run: systemctl --version
I1119 02:04:24.606906   62694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-266785
I1119 02:04:24.630174   62694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/functional-266785/id_rsa Username:docker}
I1119 02:04:24.736221   62694 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-266785 image ls --format yaml --alsologtostderr:
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "22820214"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "74311308"
- id: sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "27061991"
- id: sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "25963718"
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "320448"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "17385568"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-266785
size: "2372971"
- id: sha256:91a0df4b219957b64d8cf7d8ae571fc3d10ff0f99c0bdd18f56d3518b4c7cd51
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-266785
size: "993"
- id: sha256:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "22631814"
- id: sha256:60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
repoTags:
- docker.io/library/nginx:latest
size: "59772801"
- id: sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "22384805"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-266785 image ls --format yaml --alsologtostderr:
I1119 02:04:20.343507   62271 out.go:360] Setting OutFile to fd 1 ...
I1119 02:04:20.343603   62271 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 02:04:20.343612   62271 out.go:374] Setting ErrFile to fd 2...
I1119 02:04:20.343616   62271 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 02:04:20.343822   62271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11107/.minikube/bin
I1119 02:04:20.344356   62271 config.go:182] Loaded profile config "functional-266785": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1119 02:04:20.344459   62271 config.go:182] Loaded profile config "functional-266785": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1119 02:04:20.344823   62271 cli_runner.go:164] Run: docker container inspect functional-266785 --format={{.State.Status}}
I1119 02:04:20.364248   62271 ssh_runner.go:195] Run: systemctl --version
I1119 02:04:20.364311   62271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-266785
I1119 02:04:20.383267   62271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/functional-266785/id_rsa Username:docker}
I1119 02:04:20.480234   62271 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.93s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-266785 ssh pgrep buildkitd: exit status 1 (328.665918ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 image build -t localhost/my-image:functional-266785 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-266785 image build -t localhost/my-image:functional-266785 testdata/build --alsologtostderr: (4.358868334s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-266785 image build -t localhost/my-image:functional-266785 testdata/build --alsologtostderr:
I1119 02:04:20.915796   62427 out.go:360] Setting OutFile to fd 1 ...
I1119 02:04:20.916075   62427 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 02:04:20.916084   62427 out.go:374] Setting ErrFile to fd 2...
I1119 02:04:20.916089   62427 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 02:04:20.916314   62427 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11107/.minikube/bin
I1119 02:04:20.917031   62427 config.go:182] Loaded profile config "functional-266785": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1119 02:04:20.918077   62427 config.go:182] Loaded profile config "functional-266785": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1119 02:04:20.918688   62427 cli_runner.go:164] Run: docker container inspect functional-266785 --format={{.State.Status}}
I1119 02:04:20.942599   62427 ssh_runner.go:195] Run: systemctl --version
I1119 02:04:20.942661   62427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-266785
I1119 02:04:20.965908   62427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/functional-266785/id_rsa Username:docker}
I1119 02:04:21.066803   62427 build_images.go:162] Building image from path: /tmp/build.2749398168.tar
I1119 02:04:21.066872   62427 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1119 02:04:21.078391   62427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2749398168.tar
I1119 02:04:21.083088   62427 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2749398168.tar: stat -c "%s %y" /var/lib/minikube/build/build.2749398168.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2749398168.tar': No such file or directory
I1119 02:04:21.083122   62427 ssh_runner.go:362] scp /tmp/build.2749398168.tar --> /var/lib/minikube/build/build.2749398168.tar (3072 bytes)
I1119 02:04:21.104063   62427 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2749398168
I1119 02:04:21.114465   62427 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2749398168 -xf /var/lib/minikube/build/build.2749398168.tar
I1119 02:04:21.124291   62427 containerd.go:394] Building image: /var/lib/minikube/build/build.2749398168
I1119 02:04:21.124378   62427 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2749398168 --local dockerfile=/var/lib/minikube/build/build.2749398168 --output type=image,name=localhost/my-image:functional-266785
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.7s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.5s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.5s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 1.0s

#7 [3/3] ADD content.txt /
#7 DONE 0.2s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:255c012b97555e94f7e5509243a6a1b254a0fa259c5a08374ff8b93f26d017e1 done
#8 exporting config sha256:f75fd9ab53187e4fe7f350d25a287cb877767f3f1ca23683e513973ed02db430 done
#8 naming to localhost/my-image:functional-266785
#8 naming to localhost/my-image:functional-266785 done
#8 DONE 0.1s
I1119 02:04:25.178821   62427 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2749398168 --local dockerfile=/var/lib/minikube/build/build.2749398168 --output type=image,name=localhost/my-image:functional-266785: (4.054414577s)
I1119 02:04:25.178891   62427 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2749398168
I1119 02:04:25.189270   62427 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2749398168.tar
I1119 02:04:25.198785   62427 build_images.go:218] Built localhost/my-image:functional-266785 from /tmp/build.2749398168.tar
I1119 02:04:25.198817   62427 build_images.go:134] succeeded building to: functional-266785
I1119 02:04:25.198823   62427 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.93s)
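Note: since pgrep found no buildkitd running for the ssh user, minikube tars the build context, ships it into the node, and drives buildctl there, as the stderr trace above shows. The user-facing pair of commands boils down to:
    out/minikube-linux-amd64 -p functional-266785 image build -t localhost/my-image:functional-266785 testdata/build
    out/minikube-linux-amd64 -p functional-266785 image ls    # localhost/my-image:functional-266785 now listed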

TestFunctional/parallel/ImageCommands/Setup (1.82s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.802648932s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-266785
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.82s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.14s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-266785 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-266785 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-vfk57" [2a2aadb9-8e93-46f3-9039-f2284aacdf76] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-vfk57" [2a2aadb9-8e93-46f3-9039-f2284aacdf76] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.004518856s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.14s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.09s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 image load --daemon kicbase/echo-server:functional-266785 --alsologtostderr
E1119 02:03:56.369401   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/addons-168589/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.09s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 image load --daemon kicbase/echo-server:functional-266785 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.00s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.82s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-266785
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 image load --daemon kicbase/echo-server:functional-266785 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.82s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 image save kicbase/echo-server:functional-266785 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 image rm kicbase/echo-server:functional-266785 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.62s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.62s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-266785
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 image save --daemon kicbase/echo-server:functional-266785 --alsologtostderr
E1119 02:04:01.491275   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/addons-168589/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-266785
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)
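Note: taken together, the SaveToFile/Remove/LoadFromFile/SaveDaemon tests above round-trip one image: cluster to tarball, tarball back into the cluster, then back into the host's docker daemon. In sequence (the /tmp tarball path is illustrative; the run used a workspace path):
    out/minikube-linux-amd64 -p functional-266785 image save kicbase/echo-server:functional-266785 /tmp/echo.tar
    out/minikube-linux-amd64 -p functional-266785 image rm kicbase/echo-server:functional-266785
    out/minikube-linux-amd64 -p functional-266785 image load /tmp/echo.tar
    out/minikube-linux-amd64 -p functional-266785 image save --daemon kicbase/echo-server:functional-266785
    docker image inspect kicbase/echo-server:functional-266785    # back on the host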

TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

TestFunctional/parallel/ProfileCmd/profile_list (0.4s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "345.09357ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "59.628115ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "341.310546ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "61.340119ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

TestFunctional/parallel/MountCmd/any-port (9.36s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-266785 /tmp/TestFunctionalparallelMountCmdany-port3131594898/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763517842740661885" to /tmp/TestFunctionalparallelMountCmdany-port3131594898/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763517842740661885" to /tmp/TestFunctionalparallelMountCmdany-port3131594898/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763517842740661885" to /tmp/TestFunctionalparallelMountCmdany-port3131594898/001/test-1763517842740661885
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-266785 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (313.217902ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1119 02:04:03.054145   14657 retry.go:31] will retry after 686.229836ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 19 02:04 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 19 02:04 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 19 02:04 test-1763517842740661885
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 ssh cat /mount-9p/test-1763517842740661885
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-266785 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [9c517a99-4570-4538-8d16-cf0f75fb0b3f] Pending
helpers_test.go:352: "busybox-mount" [9c517a99-4570-4538-8d16-cf0f75fb0b3f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [9c517a99-4570-4538-8d16-cf0f75fb0b3f] Running
helpers_test.go:352: "busybox-mount" [9c517a99-4570-4538-8d16-cf0f75fb0b3f] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [9c517a99-4570-4538-8d16-cf0f75fb0b3f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004241207s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-266785 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 ssh "sudo umount -f /mount-9p"
E1119 02:04:11.733503   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/addons-168589/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-266785 /tmp/TestFunctionalparallelMountCmdany-port3131594898/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.36s)
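
A minimal sketch of the round-trip this subtest performs, for reproducing it by hand (the temp directory and the sleep are stand-ins; the suite retries findmnt instead of sleeping, and also runs a busybox pod against the mount):

    SRC=$(mktemp -d)
    echo "test-$(date +%s%N)" > "$SRC/created-by-test"
    out/minikube-linux-amd64 mount -p functional-266785 "$SRC:/mount-9p" &   # background 9p server
    MOUNT_PID=$!
    sleep 2
    out/minikube-linux-amd64 -p functional-266785 ssh "findmnt -T /mount-9p | grep 9p"   # confirm the 9p mount
    out/minikube-linux-amd64 -p functional-266785 ssh cat /mount-9p/created-by-test      # read back through the mount
    kill "$MOUNT_PID"   # tear the mount down, as the test cleanup does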

TestFunctional/parallel/ServiceCmd/List (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.40s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 service list -o json
functional_test.go:1504: Took "403.517997ms" to run "out/minikube-linux-amd64 -p functional-266785 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.40s)
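
The JSON form is machine-readable; a sketch of post-processing it, assuming jq is installed and the usual Namespace/Name/URLs fields (neither assumption is part of the suite):

    out/minikube-linux-amd64 -p functional-266785 service list -o json \
      | jq -r '.[] | "\(.Namespace)/\(.Name) -> \((.URLs // []) | join(","))"'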

TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31880
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

TestFunctional/parallel/ServiceCmd/Format (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.41s)

TestFunctional/parallel/ServiceCmd/URL (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31880
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)
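
HTTPS, Format, and URL all resolve to the same NodePort (31880 in this run); a sketch of actually probing the endpoint, which these subtests stop short of:

    URL=$(out/minikube-linux-amd64 -p functional-266785 service hello-node --url)
    curl -fsS "$URL"   # hello-node is echo-server based, so a response body is expected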

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-266785 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-266785 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-266785 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 58356: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-266785 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-266785 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.24s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-266785 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [312c1e19-88a4-42da-895f-254114e2e348] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [312c1e19-88a4-42da-895f-254114e2e348] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 16.04034257s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.24s)
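
A sketch of the setup being timed here, with kubectl wait standing in for the suite's own poll loop:

    kubectl --context functional-266785 apply -f testdata/testsvc.yaml
    kubectl --context functional-266785 wait pod -l run=nginx-svc --for=condition=Ready --timeout=4m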

TestFunctional/parallel/MountCmd/specific-port (2.15s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-266785 /tmp/TestFunctionalparallelMountCmdspecific-port3265969714/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-266785 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (331.180856ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1119 02:04:12.429062   14657 retry.go:31] will retry after 709.631622ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 ssh "findmnt -T /mount-9p | grep 9p"
I1119 02:04:13.210883   14657 detect.go:223] nested VM detected
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-266785 /tmp/TestFunctionalparallelMountCmdspecific-port3265969714/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-266785 ssh "sudo umount -f /mount-9p": exit status 1 (312.509098ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-266785 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-266785 /tmp/TestFunctionalparallelMountCmdspecific-port3265969714/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.15s)
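
Same flow as any-port, but with the host-side 9p server pinned to a fixed port via --port (46464 in this run); a sketch:

    SRC=$(mktemp -d)
    out/minikube-linux-amd64 mount -p functional-266785 "$SRC:/mount-9p" --port 46464 &
    out/minikube-linux-amd64 -p functional-266785 ssh "findmnt -T /mount-9p | grep 9p"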

TestFunctional/parallel/MountCmd/VerifyCleanup (1.82s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-266785 /tmp/TestFunctionalparallelMountCmdVerifyCleanup492836997/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-266785 /tmp/TestFunctionalparallelMountCmdVerifyCleanup492836997/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-266785 /tmp/TestFunctionalparallelMountCmdVerifyCleanup492836997/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-266785 ssh "findmnt -T" /mount1: exit status 1 (353.640632ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1119 02:04:14.598373   14657 retry.go:31] will retry after 589.32697ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-266785 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-266785 /tmp/TestFunctionalparallelMountCmdVerifyCleanup492836997/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-266785 /tmp/TestFunctionalparallelMountCmdVerifyCleanup492836997/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-266785 /tmp/TestFunctionalparallelMountCmdVerifyCleanup492836997/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.82s)
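
The cleanup path under test is the --kill flag, which terminates every outstanding mount process for the profile at once; that is why the per-mount stop logic above then finds no parent process. A sketch:

    out/minikube-linux-amd64 mount -p functional-266785 --kill=true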

TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-266785 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)
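
All three variants run the same command, which rewrites the profile's kubeconfig entry to point at the cluster's current endpoint; a sketch of checking the result by hand:

    out/minikube-linux-amd64 -p functional-266785 update-context
    kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-266785")].cluster.server}'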

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-266785 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.32.109 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
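
With the tunnel up, the LoadBalancer ingress IP (10.97.32.109 in this run) is routable from the host; a sketch of the direct probe:

    IP=$(kubectl --context functional-266785 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl -fsS "http://$IP"   # nginx default page expected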

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-266785 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-266785
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-266785
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-266785
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (108.33s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1119 02:05:13.177256   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/addons-168589/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-917673 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (1m47.612265678s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (108.33s)

TestMultiControlPlane/serial/DeployApp (5.58s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-917673 kubectl -- rollout status deployment/busybox: (3.386095151s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 kubectl -- exec busybox-7b57f96db7-8zvsk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 kubectl -- exec busybox-7b57f96db7-crnrf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 kubectl -- exec busybox-7b57f96db7-dwkn2 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 kubectl -- exec busybox-7b57f96db7-8zvsk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 kubectl -- exec busybox-7b57f96db7-crnrf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 kubectl -- exec busybox-7b57f96db7-dwkn2 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 kubectl -- exec busybox-7b57f96db7-8zvsk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 kubectl -- exec busybox-7b57f96db7-crnrf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 kubectl -- exec busybox-7b57f96db7-dwkn2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.58s)
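
A condensed sketch of the DNS checks above: every busybox replica must resolve all three names (pod names come from the rollout, so they are discovered rather than hard-coded):

    PODS=$(out/minikube-linux-amd64 -p ha-917673 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}')
    for pod in $PODS; do
      for name in kubernetes.io kubernetes.default kubernetes.default.svc.cluster.local; do
        out/minikube-linux-amd64 -p ha-917673 kubectl -- exec "$pod" -- nslookup "$name"
      done
    done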

TestMultiControlPlane/serial/PingHostFromPods (1.18s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 kubectl -- exec busybox-7b57f96db7-8zvsk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 kubectl -- exec busybox-7b57f96db7-8zvsk -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 kubectl -- exec busybox-7b57f96db7-crnrf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 kubectl -- exec busybox-7b57f96db7-crnrf -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 kubectl -- exec busybox-7b57f96db7-dwkn2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 kubectl -- exec busybox-7b57f96db7-dwkn2 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.18s)
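
The awk/cut pipeline pulls the host gateway's address out of busybox's nslookup output (the answer sits on line 5, third space-separated field), and that address is then pinged; a sketch for one pod from this run:

    HOST_IP=$(out/minikube-linux-amd64 -p ha-917673 kubectl -- exec busybox-7b57f96db7-8zvsk -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    out/minikube-linux-amd64 -p ha-917673 kubectl -- exec busybox-7b57f96db7-8zvsk -- ping -c 1 "$HOST_IP"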

TestMultiControlPlane/serial/AddWorkerNode (24.27s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 node add --alsologtostderr -v 5
E1119 02:06:35.099340   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/addons-168589/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-917673 node add --alsologtostderr -v 5: (23.377644275s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.27s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-917673 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

TestMultiControlPlane/serial/CopyFile (17.18s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 cp testdata/cp-test.txt ha-917673:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 ssh -n ha-917673 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 cp ha-917673:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3205658892/001/cp-test_ha-917673.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 ssh -n ha-917673 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 cp ha-917673:/home/docker/cp-test.txt ha-917673-m02:/home/docker/cp-test_ha-917673_ha-917673-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 ssh -n ha-917673 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 ssh -n ha-917673-m02 "sudo cat /home/docker/cp-test_ha-917673_ha-917673-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 cp ha-917673:/home/docker/cp-test.txt ha-917673-m03:/home/docker/cp-test_ha-917673_ha-917673-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 ssh -n ha-917673 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 ssh -n ha-917673-m03 "sudo cat /home/docker/cp-test_ha-917673_ha-917673-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 cp ha-917673:/home/docker/cp-test.txt ha-917673-m04:/home/docker/cp-test_ha-917673_ha-917673-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 ssh -n ha-917673 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 ssh -n ha-917673-m04 "sudo cat /home/docker/cp-test_ha-917673_ha-917673-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 cp testdata/cp-test.txt ha-917673-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 ssh -n ha-917673-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 cp ha-917673-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3205658892/001/cp-test_ha-917673-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 ssh -n ha-917673-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 cp ha-917673-m02:/home/docker/cp-test.txt ha-917673:/home/docker/cp-test_ha-917673-m02_ha-917673.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 ssh -n ha-917673-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 ssh -n ha-917673 "sudo cat /home/docker/cp-test_ha-917673-m02_ha-917673.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 cp ha-917673-m02:/home/docker/cp-test.txt ha-917673-m03:/home/docker/cp-test_ha-917673-m02_ha-917673-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 ssh -n ha-917673-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 ssh -n ha-917673-m03 "sudo cat /home/docker/cp-test_ha-917673-m02_ha-917673-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 cp ha-917673-m02:/home/docker/cp-test.txt ha-917673-m04:/home/docker/cp-test_ha-917673-m02_ha-917673-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 ssh -n ha-917673-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 ssh -n ha-917673-m04 "sudo cat /home/docker/cp-test_ha-917673-m02_ha-917673-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 cp testdata/cp-test.txt ha-917673-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 ssh -n ha-917673-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 cp ha-917673-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3205658892/001/cp-test_ha-917673-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 ssh -n ha-917673-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 cp ha-917673-m03:/home/docker/cp-test.txt ha-917673:/home/docker/cp-test_ha-917673-m03_ha-917673.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 ssh -n ha-917673-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 ssh -n ha-917673 "sudo cat /home/docker/cp-test_ha-917673-m03_ha-917673.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 cp ha-917673-m03:/home/docker/cp-test.txt ha-917673-m02:/home/docker/cp-test_ha-917673-m03_ha-917673-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 ssh -n ha-917673-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 ssh -n ha-917673-m02 "sudo cat /home/docker/cp-test_ha-917673-m03_ha-917673-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 cp ha-917673-m03:/home/docker/cp-test.txt ha-917673-m04:/home/docker/cp-test_ha-917673-m03_ha-917673-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 ssh -n ha-917673-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 ssh -n ha-917673-m04 "sudo cat /home/docker/cp-test_ha-917673-m03_ha-917673-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 cp testdata/cp-test.txt ha-917673-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 ssh -n ha-917673-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 cp ha-917673-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3205658892/001/cp-test_ha-917673-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 ssh -n ha-917673-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 cp ha-917673-m04:/home/docker/cp-test.txt ha-917673:/home/docker/cp-test_ha-917673-m04_ha-917673.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 ssh -n ha-917673-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 ssh -n ha-917673 "sudo cat /home/docker/cp-test_ha-917673-m04_ha-917673.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 cp ha-917673-m04:/home/docker/cp-test.txt ha-917673-m02:/home/docker/cp-test_ha-917673-m04_ha-917673-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 ssh -n ha-917673-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 ssh -n ha-917673-m02 "sudo cat /home/docker/cp-test_ha-917673-m04_ha-917673-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 cp ha-917673-m04:/home/docker/cp-test.txt ha-917673-m03:/home/docker/cp-test_ha-917673-m04_ha-917673-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 ssh -n ha-917673-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 ssh -n ha-917673-m03 "sudo cat /home/docker/cp-test_ha-917673-m04_ha-917673-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.18s)
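
The block above is the full (source, destination) copy matrix over four nodes; a single leg, verified the same way the helpers do, looks like:

    out/minikube-linux-amd64 -p ha-917673 cp testdata/cp-test.txt ha-917673:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-917673 cp ha-917673:/home/docker/cp-test.txt \
      ha-917673-m02:/home/docker/cp-test_ha-917673_ha-917673-m02.txt
    out/minikube-linux-amd64 -p ha-917673 ssh -n ha-917673-m02 \
      "sudo cat /home/docker/cp-test_ha-917673_ha-917673-m02.txt"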

TestMultiControlPlane/serial/StopSecondaryNode (12.73s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-917673 node stop m02 --alsologtostderr -v 5: (12.030074967s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-917673 status --alsologtostderr -v 5: exit status 7 (698.977976ms)
-- stdout --
	ha-917673
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-917673-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-917673-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-917673-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1119 02:07:28.718119   84066 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:07:28.718236   84066 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:07:28.718254   84066 out.go:374] Setting ErrFile to fd 2...
	I1119 02:07:28.718258   84066 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:07:28.718463   84066 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11107/.minikube/bin
	I1119 02:07:28.718632   84066 out.go:368] Setting JSON to false
	I1119 02:07:28.718660   84066 mustload.go:66] Loading cluster: ha-917673
	I1119 02:07:28.718795   84066 notify.go:221] Checking for updates...
	I1119 02:07:28.719123   84066 config.go:182] Loaded profile config "ha-917673": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:07:28.719138   84066 status.go:174] checking status of ha-917673 ...
	I1119 02:07:28.719619   84066 cli_runner.go:164] Run: docker container inspect ha-917673 --format={{.State.Status}}
	I1119 02:07:28.740040   84066 status.go:371] ha-917673 host status = "Running" (err=<nil>)
	I1119 02:07:28.740077   84066 host.go:66] Checking if "ha-917673" exists ...
	I1119 02:07:28.740432   84066 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-917673
	I1119 02:07:28.759152   84066 host.go:66] Checking if "ha-917673" exists ...
	I1119 02:07:28.759432   84066 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:07:28.759482   84066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-917673
	I1119 02:07:28.778476   84066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/ha-917673/id_rsa Username:docker}
	I1119 02:07:28.872887   84066 ssh_runner.go:195] Run: systemctl --version
	I1119 02:07:28.879539   84066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:07:28.892845   84066 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:07:28.949338   84066 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-19 02:07:28.939502234 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:07:28.949909   84066 kubeconfig.go:125] found "ha-917673" server: "https://192.168.49.254:8443"
	I1119 02:07:28.949935   84066 api_server.go:166] Checking apiserver status ...
	I1119 02:07:28.949983   84066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:07:28.962218   84066 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1384/cgroup
	W1119 02:07:28.970857   84066 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1384/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1119 02:07:28.970900   84066 ssh_runner.go:195] Run: ls
	I1119 02:07:28.974745   84066 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1119 02:07:28.979639   84066 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1119 02:07:28.979660   84066 status.go:463] ha-917673 apiserver status = Running (err=<nil>)
	I1119 02:07:28.979669   84066 status.go:176] ha-917673 status: &{Name:ha-917673 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 02:07:28.979683   84066 status.go:174] checking status of ha-917673-m02 ...
	I1119 02:07:28.979904   84066 cli_runner.go:164] Run: docker container inspect ha-917673-m02 --format={{.State.Status}}
	I1119 02:07:28.999078   84066 status.go:371] ha-917673-m02 host status = "Stopped" (err=<nil>)
	I1119 02:07:28.999097   84066 status.go:384] host is not running, skipping remaining checks
	I1119 02:07:28.999102   84066 status.go:176] ha-917673-m02 status: &{Name:ha-917673-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 02:07:28.999121   84066 status.go:174] checking status of ha-917673-m03 ...
	I1119 02:07:28.999357   84066 cli_runner.go:164] Run: docker container inspect ha-917673-m03 --format={{.State.Status}}
	I1119 02:07:29.018785   84066 status.go:371] ha-917673-m03 host status = "Running" (err=<nil>)
	I1119 02:07:29.018811   84066 host.go:66] Checking if "ha-917673-m03" exists ...
	I1119 02:07:29.019123   84066 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-917673-m03
	I1119 02:07:29.037956   84066 host.go:66] Checking if "ha-917673-m03" exists ...
	I1119 02:07:29.038303   84066 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:07:29.038356   84066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-917673-m03
	I1119 02:07:29.056595   84066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32799 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/ha-917673-m03/id_rsa Username:docker}
	I1119 02:07:29.150849   84066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:07:29.165067   84066 kubeconfig.go:125] found "ha-917673" server: "https://192.168.49.254:8443"
	I1119 02:07:29.165092   84066 api_server.go:166] Checking apiserver status ...
	I1119 02:07:29.165134   84066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:07:29.176637   84066 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1274/cgroup
	W1119 02:07:29.185131   84066 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1274/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1119 02:07:29.185202   84066 ssh_runner.go:195] Run: ls
	I1119 02:07:29.188998   84066 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1119 02:07:29.193578   84066 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1119 02:07:29.193603   84066 status.go:463] ha-917673-m03 apiserver status = Running (err=<nil>)
	I1119 02:07:29.193613   84066 status.go:176] ha-917673-m03 status: &{Name:ha-917673-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 02:07:29.193631   84066 status.go:174] checking status of ha-917673-m04 ...
	I1119 02:07:29.193947   84066 cli_runner.go:164] Run: docker container inspect ha-917673-m04 --format={{.State.Status}}
	I1119 02:07:29.213218   84066 status.go:371] ha-917673-m04 host status = "Running" (err=<nil>)
	I1119 02:07:29.213239   84066 host.go:66] Checking if "ha-917673-m04" exists ...
	I1119 02:07:29.213495   84066 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-917673-m04
	I1119 02:07:29.231249   84066 host.go:66] Checking if "ha-917673-m04" exists ...
	I1119 02:07:29.231529   84066 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:07:29.231570   84066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-917673-m04
	I1119 02:07:29.249946   84066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32804 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/ha-917673-m04/id_rsa Username:docker}
	I1119 02:07:29.343577   84066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:07:29.355881   84066 status.go:176] ha-917673-m04 status: &{Name:ha-917673-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.73s)
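
status exits non-zero (7 here) whenever any host in the profile is stopped, so the test treats the exit code as expected rather than as a failure; a sketch of the same check:

    out/minikube-linux-amd64 -p ha-917673 node stop m02
    out/minikube-linux-amd64 -p ha-917673 status || echo "status exited $? (a host is down)"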

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

TestMultiControlPlane/serial/RestartSecondaryNode (9.07s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-917673 node start m02 --alsologtostderr -v 5: (8.115320746s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (9.07s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.91s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.91s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (96.7s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-917673 stop --alsologtostderr -v 5: (37.276609212s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 start --wait true --alsologtostderr -v 5
E1119 02:08:51.238090   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/addons-168589/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:08:54.421659   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/functional-266785/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:08:54.428688   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/functional-266785/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:08:54.440188   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/functional-266785/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:08:54.461720   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/functional-266785/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:08:54.503161   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/functional-266785/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:08:54.584663   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/functional-266785/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:08:54.746227   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/functional-266785/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:08:55.067795   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/functional-266785/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:08:55.709608   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/functional-266785/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:08:56.991180   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/functional-266785/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:08:59.552542   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/functional-266785/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:09:04.674791   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/functional-266785/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:09:14.916192   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/functional-266785/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-917673 start --wait true --alsologtostderr -v 5: (59.289885032s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (96.70s)
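
The invariant under test: the node list is unchanged across a full stop/start cycle; a sketch with an explicit diff (the diff is an addition, not part of the suite):

    out/minikube-linux-amd64 -p ha-917673 node list > /tmp/nodes-before.txt
    out/minikube-linux-amd64 -p ha-917673 stop
    out/minikube-linux-amd64 -p ha-917673 start --wait true
    out/minikube-linux-amd64 -p ha-917673 node list | diff /tmp/nodes-before.txt -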

TestMultiControlPlane/serial/DeleteSecondaryNode (9.43s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 node delete m03 --alsologtostderr -v 5
E1119 02:09:18.941537   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/addons-168589/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-917673 node delete m03 --alsologtostderr -v 5: (8.609846158s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.43s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.72s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.72s)

TestMultiControlPlane/serial/StopCluster (36.26s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 stop --alsologtostderr -v 5
E1119 02:09:35.397595   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/functional-266785/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-917673 stop --alsologtostderr -v 5: (36.144373033s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-917673 status --alsologtostderr -v 5: exit status 7 (114.972686ms)
-- stdout --
	ha-917673
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-917673-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-917673-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1119 02:10:03.089976  100480 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:10:03.090117  100480 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:10:03.090129  100480 out.go:374] Setting ErrFile to fd 2...
	I1119 02:10:03.090135  100480 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:10:03.090362  100480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11107/.minikube/bin
	I1119 02:10:03.090525  100480 out.go:368] Setting JSON to false
	I1119 02:10:03.090554  100480 mustload.go:66] Loading cluster: ha-917673
	I1119 02:10:03.090647  100480 notify.go:221] Checking for updates...
	I1119 02:10:03.090921  100480 config.go:182] Loaded profile config "ha-917673": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:10:03.090938  100480 status.go:174] checking status of ha-917673 ...
	I1119 02:10:03.091339  100480 cli_runner.go:164] Run: docker container inspect ha-917673 --format={{.State.Status}}
	I1119 02:10:03.110874  100480 status.go:371] ha-917673 host status = "Stopped" (err=<nil>)
	I1119 02:10:03.110896  100480 status.go:384] host is not running, skipping remaining checks
	I1119 02:10:03.110909  100480 status.go:176] ha-917673 status: &{Name:ha-917673 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 02:10:03.110937  100480 status.go:174] checking status of ha-917673-m02 ...
	I1119 02:10:03.111259  100480 cli_runner.go:164] Run: docker container inspect ha-917673-m02 --format={{.State.Status}}
	I1119 02:10:03.129883  100480 status.go:371] ha-917673-m02 host status = "Stopped" (err=<nil>)
	I1119 02:10:03.129910  100480 status.go:384] host is not running, skipping remaining checks
	I1119 02:10:03.129925  100480 status.go:176] ha-917673-m02 status: &{Name:ha-917673-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 02:10:03.129948  100480 status.go:174] checking status of ha-917673-m04 ...
	I1119 02:10:03.130213  100480 cli_runner.go:164] Run: docker container inspect ha-917673-m04 --format={{.State.Status}}
	I1119 02:10:03.147452  100480 status.go:371] ha-917673-m04 host status = "Stopped" (err=<nil>)
	I1119 02:10:03.147472  100480 status.go:384] host is not running, skipping remaining checks
	I1119 02:10:03.147479  100480 status.go:176] ha-917673-m04 status: &{Name:ha-917673-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.26s)

TestMultiControlPlane/serial/RestartCluster (54.32s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1119 02:10:16.359690   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/functional-266785/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-917673 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (53.488487179s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (54.32s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.74s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.74s)

TestMultiControlPlane/serial/AddSecondaryNode (67.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 node add --control-plane --alsologtostderr -v 5
E1119 02:11:38.282102   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/functional-266785/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-917673 node add --control-plane --alsologtostderr -v 5: (1m6.157232558s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-917673 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (67.07s)
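
For reference, a minimal sketch of the control-plane scale-out this subtest performs; the profile name ha-demo is illustrative, the flags are those exercised above:

	minikube -p ha-demo node add --control-plane   # join an additional control-plane node
	minikube -p ha-demo status                     # confirm each node reports Running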

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)

TestJSONOutput/start/Command (39.41s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-393505 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-393505 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (39.405192848s)
--- PASS: TestJSONOutput/start/Command (39.41s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.7s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-393505 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.61s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-393505 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.88s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-393505 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-393505 --output=json --user=testUser: (5.884385786s)
--- PASS: TestJSONOutput/stop/Command (5.88s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-089500 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-089500 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (79.605709ms)

-- stdout --
	{"specversion":"1.0","id":"8d9bb7f5-3f7b-4a00-8f72-0651fe887c8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-089500] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fdec3075-71e9-478f-8fad-a4aeea846a8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21924"}}
	{"specversion":"1.0","id":"c6736b55-6210-4e84-9a40-dd36a3b4849f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1fb70528-f525-4cc0-bcf0-1df15cc4dad4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21924-11107/kubeconfig"}}
	{"specversion":"1.0","id":"c9dd7921-cd25-4ed6-aea0-484b65fe43a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11107/.minikube"}}
	{"specversion":"1.0","id":"e33ff6f8-aff0-446c-98d0-91e5f9a05377","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"15365f72-e1a2-4f7e-a896-0da2445473f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"853a2572-c86d-4b58-8dde-97ef0f554317","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-089500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-089500
--- PASS: TestErrorJSONOutput (0.24s)
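
For reference, each line in the stream above is a CloudEvents envelope, as emitted by --output=json; a minimal sketch of consuming that stream from a shell (jq and the profile name json-demo are assumptions, not part of this run):

	minikube start -p json-demo --output=json --driver=docker \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'   # print step messages only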

TestKicCustomNetwork/create_custom_network (35.05s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-264302 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-264302 --network=: (32.864309053s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-264302" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-264302
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-264302: (2.169127788s)
--- PASS: TestKicCustomNetwork/create_custom_network (35.05s)

TestKicCustomNetwork/use_default_bridge_network (23.26s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-732326 --network=bridge
E1119 02:13:51.238151   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/addons-168589/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:13:54.423976   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/functional-266785/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-732326 --network=bridge: (21.218149911s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-732326" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-732326
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-732326: (2.022472007s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.26s)

TestKicExistingNetwork (24.22s)

=== RUN   TestKicExistingNetwork
I1119 02:14:04.666911   14657 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1119 02:14:04.684636   14657 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1119 02:14:04.684715   14657 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1119 02:14:04.684737   14657 cli_runner.go:164] Run: docker network inspect existing-network
W1119 02:14:04.701968   14657 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1119 02:14:04.701996   14657 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1119 02:14:04.702033   14657 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1119 02:14:04.702152   14657 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1119 02:14:04.720695   14657 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ed39016f2aa9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:16:a0:62:5a:51} reservation:<nil>}
I1119 02:14:04.721216   14657 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002d7d00}
I1119 02:14:04.721247   14657 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1119 02:14:04.721294   14657 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1119 02:14:04.772125   14657 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-153872 --network=existing-network
E1119 02:14:22.123484   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/functional-266785/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-153872 --network=existing-network: (22.027305059s)
helpers_test.go:175: Cleaning up "existing-network-153872" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-153872
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-153872: (2.052155556s)
I1119 02:14:28.870420   14657 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.22s)
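
For reference, a minimal sketch of what this test verifies: minikube can attach to a docker network that already exists instead of creating its own. The profile name existing-net-demo is illustrative; the network name, subnet, and flags are those from the run above:

	docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
	minikube start -p existing-net-demo --network=existing-network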

TestKicCustomSubnet (24.35s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-877106 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-877106 --subnet=192.168.60.0/24: (22.167091485s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-877106 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-877106" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-877106
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-877106: (2.159671328s)
--- PASS: TestKicCustomSubnet (24.35s)
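
For reference, the custom-subnet check above reduced to two commands (the profile name subnet-demo is illustrative; the format string is the one the test uses):

	minikube start -p subnet-demo --subnet=192.168.60.0/24
	docker network inspect subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"   # expect 192.168.60.0/24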

TestKicStaticIP (24.11s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-005186 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-005186 --static-ip=192.168.200.200: (21.812671097s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-005186 ip
helpers_test.go:175: Cleaning up "static-ip-005186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-005186
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-005186: (2.144653179s)
--- PASS: TestKicStaticIP (24.11s)
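
For reference, the static-IP check above in two commands (the profile name static-demo is illustrative):

	minikube start -p static-demo --static-ip=192.168.200.200
	minikube -p static-demo ip   # expected to print 192.168.200.200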

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (50.17s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-237233 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-237233 --driver=docker  --container-runtime=containerd: (24.668898653s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-240185 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-240185 --driver=docker  --container-runtime=containerd: (19.578510946s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-237233
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-240185
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-240185" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-240185
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-240185: (2.337188437s)
helpers_test.go:175: Cleaning up "first-237233" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-237233
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-237233: (2.332358515s)
--- PASS: TestMinikubeProfile (50.17s)
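
For reference, the profile-selection flow this test covers, reduced to its two commands (the profile name first-demo is illustrative):

	minikube profile first-demo      # make this the active profile
	minikube profile list -ojson     # list all profiles as JSON, as the test does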

TestMountStart/serial/StartWithMountFirst (4.57s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-526678 --memory=3072 --mount-string /tmp/TestMountStartserial1617982785/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-526678 --memory=3072 --mount-string /tmp/TestMountStartserial1617982785/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.567545674s)
--- PASS: TestMountStart/serial/StartWithMountFirst (4.57s)
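
For reference, a minimal sketch of the host-folder mount these subtests exercise, mirroring the flags from the run above; the profile name mount-demo and the host path are placeholders, and the driver flags are omitted:

	minikube start -p mount-demo --memory=3072 --no-kubernetes \
	  --mount-string /path/on/host:/minikube-host \
	  --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0
	minikube -p mount-demo ssh -- ls /minikube-host   # verify, as VerifyMountFirst does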

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-526678 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (5.03s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-537959 --memory=3072 --mount-string /tmp/TestMountStartserial1617982785/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-537959 --memory=3072 --mount-string /tmp/TestMountStartserial1617982785/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.030169946s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.03s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-537959 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.68s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-526678 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-526678 --alsologtostderr -v=5: (1.679426729s)
--- PASS: TestMountStart/serial/DeleteFirst (1.68s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-537959 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-537959
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-537959: (1.274631944s)
--- PASS: TestMountStart/serial/Stop (1.27s)

TestMountStart/serial/RestartStopped (7.67s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-537959
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-537959: (6.670111948s)
--- PASS: TestMountStart/serial/RestartStopped (7.67s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-537959 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (62.89s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-225207 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-225207 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m2.414494698s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (62.89s)

TestMultiNode/serial/DeployApp2Nodes (4.54s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225207 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225207 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-225207 -- rollout status deployment/busybox: (3.052415905s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225207 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225207 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225207 -- exec busybox-7b57f96db7-2jrvd -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225207 -- exec busybox-7b57f96db7-66scd -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225207 -- exec busybox-7b57f96db7-2jrvd -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225207 -- exec busybox-7b57f96db7-66scd -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225207 -- exec busybox-7b57f96db7-2jrvd -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225207 -- exec busybox-7b57f96db7-66scd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.54s)
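
For reference, the deploy-and-verify flow above against any multi-node profile; the profile name multinode-demo is illustrative, the manifest path is the one from the run:

	minikube kubectl -p multinode-demo -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
	minikube kubectl -p multinode-demo -- rollout status deployment/busybox
	minikube kubectl -p multinode-demo -- get pods -o jsonpath='{.items[*].status.podIP}'   # one IP per node if the pods spread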

TestMultiNode/serial/PingHostFrom2Pods (0.81s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225207 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225207 -- exec busybox-7b57f96db7-2jrvd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225207 -- exec busybox-7b57f96db7-2jrvd -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225207 -- exec busybox-7b57f96db7-66scd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225207 -- exec busybox-7b57f96db7-66scd -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.81s)
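
For reference, the host-reachability probe above condensed into two commands; the profile name is illustrative, and the awk/cut pipeline is the one the test uses to extract the resolved address:

	POD=$(minikube kubectl -p multinode-demo -- get pods -o jsonpath='{.items[0].metadata.name}')
	minikube kubectl -p multinode-demo -- exec "$POD" -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"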

TestMultiNode/serial/AddNode (53.11s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-225207 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-225207 -v=5 --alsologtostderr: (52.465008631s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (53.11s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-225207 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.66s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.66s)

TestMultiNode/serial/CopyFile (9.88s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 cp testdata/cp-test.txt multinode-225207:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 ssh -n multinode-225207 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 cp multinode-225207:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1612236824/001/cp-test_multinode-225207.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 ssh -n multinode-225207 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 cp multinode-225207:/home/docker/cp-test.txt multinode-225207-m02:/home/docker/cp-test_multinode-225207_multinode-225207-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 ssh -n multinode-225207 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 ssh -n multinode-225207-m02 "sudo cat /home/docker/cp-test_multinode-225207_multinode-225207-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 cp multinode-225207:/home/docker/cp-test.txt multinode-225207-m03:/home/docker/cp-test_multinode-225207_multinode-225207-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 ssh -n multinode-225207 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 ssh -n multinode-225207-m03 "sudo cat /home/docker/cp-test_multinode-225207_multinode-225207-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 cp testdata/cp-test.txt multinode-225207-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 ssh -n multinode-225207-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 cp multinode-225207-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1612236824/001/cp-test_multinode-225207-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 ssh -n multinode-225207-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 cp multinode-225207-m02:/home/docker/cp-test.txt multinode-225207:/home/docker/cp-test_multinode-225207-m02_multinode-225207.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 ssh -n multinode-225207-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 ssh -n multinode-225207 "sudo cat /home/docker/cp-test_multinode-225207-m02_multinode-225207.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 cp multinode-225207-m02:/home/docker/cp-test.txt multinode-225207-m03:/home/docker/cp-test_multinode-225207-m02_multinode-225207-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 ssh -n multinode-225207-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 ssh -n multinode-225207-m03 "sudo cat /home/docker/cp-test_multinode-225207-m02_multinode-225207-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 cp testdata/cp-test.txt multinode-225207-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 ssh -n multinode-225207-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 cp multinode-225207-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1612236824/001/cp-test_multinode-225207-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 ssh -n multinode-225207-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 cp multinode-225207-m03:/home/docker/cp-test.txt multinode-225207:/home/docker/cp-test_multinode-225207-m03_multinode-225207.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 ssh -n multinode-225207-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 ssh -n multinode-225207 "sudo cat /home/docker/cp-test_multinode-225207-m03_multinode-225207.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 cp multinode-225207-m03:/home/docker/cp-test.txt multinode-225207-m02:/home/docker/cp-test_multinode-225207-m03_multinode-225207-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 ssh -n multinode-225207-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 ssh -n multinode-225207-m02 "sudo cat /home/docker/cp-test_multinode-225207-m03_multinode-225207-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.88s)
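
For reference, one leg of the copy matrix above: pushing a file to a secondary node and reading it back over ssh (the profile name multinode-demo and the paths are illustrative):

	minikube -p multinode-demo cp testdata/cp-test.txt multinode-demo-m02:/home/docker/cp-test.txt
	minikube -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"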

TestMultiNode/serial/StopNode (2.27s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-225207 node stop m03: (1.276498998s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-225207 status: exit status 7 (494.434339ms)

-- stdout --
	multinode-225207
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-225207-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-225207-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-225207 status --alsologtostderr: exit status 7 (502.70319ms)

-- stdout --
	multinode-225207
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-225207-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-225207-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1119 02:18:44.556332  163011 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:18:44.556600  163011 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:18:44.556609  163011 out.go:374] Setting ErrFile to fd 2...
	I1119 02:18:44.556613  163011 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:18:44.556900  163011 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11107/.minikube/bin
	I1119 02:18:44.557051  163011 out.go:368] Setting JSON to false
	I1119 02:18:44.557080  163011 mustload.go:66] Loading cluster: multinode-225207
	I1119 02:18:44.557155  163011 notify.go:221] Checking for updates...
	I1119 02:18:44.557538  163011 config.go:182] Loaded profile config "multinode-225207": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:18:44.557555  163011 status.go:174] checking status of multinode-225207 ...
	I1119 02:18:44.558083  163011 cli_runner.go:164] Run: docker container inspect multinode-225207 --format={{.State.Status}}
	I1119 02:18:44.577834  163011 status.go:371] multinode-225207 host status = "Running" (err=<nil>)
	I1119 02:18:44.577864  163011 host.go:66] Checking if "multinode-225207" exists ...
	I1119 02:18:44.578128  163011 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-225207
	I1119 02:18:44.596895  163011 host.go:66] Checking if "multinode-225207" exists ...
	I1119 02:18:44.597208  163011 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:18:44.597276  163011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-225207
	I1119 02:18:44.615667  163011 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/multinode-225207/id_rsa Username:docker}
	I1119 02:18:44.708648  163011 ssh_runner.go:195] Run: systemctl --version
	I1119 02:18:44.715189  163011 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:18:44.728646  163011 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:18:44.788761  163011 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-19 02:18:44.778814717 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:18:44.789259  163011 kubeconfig.go:125] found "multinode-225207" server: "https://192.168.67.2:8443"
	I1119 02:18:44.789285  163011 api_server.go:166] Checking apiserver status ...
	I1119 02:18:44.789321  163011 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:18:44.801685  163011 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1344/cgroup
	W1119 02:18:44.809883  163011 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1344/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1119 02:18:44.809960  163011 ssh_runner.go:195] Run: ls
	I1119 02:18:44.813707  163011 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1119 02:18:44.817657  163011 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1119 02:18:44.817679  163011 status.go:463] multinode-225207 apiserver status = Running (err=<nil>)
	I1119 02:18:44.817689  163011 status.go:176] multinode-225207 status: &{Name:multinode-225207 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 02:18:44.817703  163011 status.go:174] checking status of multinode-225207-m02 ...
	I1119 02:18:44.817933  163011 cli_runner.go:164] Run: docker container inspect multinode-225207-m02 --format={{.State.Status}}
	I1119 02:18:44.836883  163011 status.go:371] multinode-225207-m02 host status = "Running" (err=<nil>)
	I1119 02:18:44.836905  163011 host.go:66] Checking if "multinode-225207-m02" exists ...
	I1119 02:18:44.837135  163011 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-225207-m02
	I1119 02:18:44.856690  163011 host.go:66] Checking if "multinode-225207-m02" exists ...
	I1119 02:18:44.856924  163011 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:18:44.856956  163011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-225207-m02
	I1119 02:18:44.876246  163011 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32914 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/multinode-225207-m02/id_rsa Username:docker}
	I1119 02:18:44.968590  163011 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:18:44.981248  163011 status.go:176] multinode-225207-m02 status: &{Name:multinode-225207-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1119 02:18:44.981281  163011 status.go:174] checking status of multinode-225207-m03 ...
	I1119 02:18:44.981533  163011 cli_runner.go:164] Run: docker container inspect multinode-225207-m03 --format={{.State.Status}}
	I1119 02:18:45.000867  163011 status.go:371] multinode-225207-m03 host status = "Stopped" (err=<nil>)
	I1119 02:18:45.000887  163011 status.go:384] host is not running, skipping remaining checks
	I1119 02:18:45.000901  163011 status.go:176] multinode-225207-m03 status: &{Name:multinode-225207-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)

TestMultiNode/serial/StartAfterStop (6.98s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 node start m03 -v=5 --alsologtostderr
E1119 02:18:51.238294   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/addons-168589/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-225207 node start m03 -v=5 --alsologtostderr: (6.27612888s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (6.98s)
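
For reference, the single-node stop/start round trip exercised by StopNode and StartAfterStop above, against any multi-node profile (multinode-demo is illustrative):

	minikube -p multinode-demo node stop m03
	minikube -p multinode-demo status             # exits 7 while a node is stopped, as seen above
	minikube -p multinode-demo node start m03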

TestMultiNode/serial/RestartKeepsNodes (71.64s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-225207
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-225207
E1119 02:18:54.422061   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/functional-266785/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-225207: (25.090024238s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-225207 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-225207 --wait=true -v=5 --alsologtostderr: (46.421060244s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-225207
--- PASS: TestMultiNode/serial/RestartKeepsNodes (71.64s)

TestMultiNode/serial/DeleteNode (5.27s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-225207 node delete m03: (4.655671408s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.27s)
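The last command above asks kubectl to render nodes through a Go template that prints only each node's Ready condition. The sketch below evaluates the same template body against a hand-written node list instead of a live cluster, which makes it easier to see what the template extracts:

// readycheck.go: evaluate the Ready-condition template from the test
// against canned JSON. The sample node list is made up; the template
// body matches the kubectl invocation above, minus shell quoting.
package main

import (
	"encoding/json"
	"os"
	"text/template"
)

const nodes = `{"items":[{"status":{"conditions":[
	{"type":"MemoryPressure","status":"False"},
	{"type":"Ready","status":"True"}]}}]}`

const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	var doc map[string]interface{}
	if err := json.Unmarshal([]byte(nodes), &doc); err != nil {
		panic(err)
	}
	t := template.Must(template.New("ready").Parse(tmpl))
	if err := t.Execute(os.Stdout, doc); err != nil {
		panic(err)
	}
	// Prints " True" once per node whose Ready condition is True.
}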

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 stop
E1119 02:20:14.305742   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/addons-168589/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-225207 stop: (23.815240173s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-225207 status: exit status 7 (96.65386ms)

                                                
                                                
-- stdout --
	multinode-225207
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-225207-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-225207 status --alsologtostderr: exit status 7 (98.627565ms)

                                                
                                                
-- stdout --
	multinode-225207
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-225207-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 02:20:32.854027  172702 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:20:32.854144  172702 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:20:32.854153  172702 out.go:374] Setting ErrFile to fd 2...
	I1119 02:20:32.854156  172702 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:20:32.854375  172702 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11107/.minikube/bin
	I1119 02:20:32.854540  172702 out.go:368] Setting JSON to false
	I1119 02:20:32.854569  172702 mustload.go:66] Loading cluster: multinode-225207
	I1119 02:20:32.854677  172702 notify.go:221] Checking for updates...
	I1119 02:20:32.854940  172702 config.go:182] Loaded profile config "multinode-225207": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:20:32.854953  172702 status.go:174] checking status of multinode-225207 ...
	I1119 02:20:32.855356  172702 cli_runner.go:164] Run: docker container inspect multinode-225207 --format={{.State.Status}}
	I1119 02:20:32.873927  172702 status.go:371] multinode-225207 host status = "Stopped" (err=<nil>)
	I1119 02:20:32.873948  172702 status.go:384] host is not running, skipping remaining checks
	I1119 02:20:32.873954  172702 status.go:176] multinode-225207 status: &{Name:multinode-225207 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 02:20:32.873973  172702 status.go:174] checking status of multinode-225207-m02 ...
	I1119 02:20:32.874231  172702 cli_runner.go:164] Run: docker container inspect multinode-225207-m02 --format={{.State.Status}}
	I1119 02:20:32.892824  172702 status.go:371] multinode-225207-m02 host status = "Stopped" (err=<nil>)
	I1119 02:20:32.892877  172702 status.go:384] host is not running, skipping remaining checks
	I1119 02:20:32.892895  172702 status.go:176] multinode-225207-m02 status: &{Name:multinode-225207-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.01s)
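The two exit-status-7 results above are expected: minikube status reports stopped hosts through its exit code, so the test reads the code instead of treating any non-zero exit as a failure. A sketch of that reading, assuming a minikube binary on PATH and the hypothetical profile "demo":

// statusexit.go: distinguish "command failed to run" from "command ran
// and reported a stopped cluster via its exit code" (7 in the log).
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "-p", "demo", "status").CombinedOutput()
	var ee *exec.ExitError
	switch {
	case errors.As(err, &ee):
		// The process ran; its exit code is meaningful (may be ok).
		fmt.Printf("status error: exit status %d (may be ok)\n%s", ee.ExitCode(), out)
	case err != nil:
		fmt.Println("could not run minikube:", err)
	default:
		fmt.Printf("%s", out)
	}
}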

                                                
                                    
TestMultiNode/serial/RestartMultiNode (44.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-225207 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-225207 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (43.897564049s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225207 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (44.51s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (23.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-225207
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-225207-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-225207-m02 --driver=docker  --container-runtime=containerd: exit status 14 (77.053455ms)

                                                
                                                
-- stdout --
	* [multinode-225207-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21924
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21924-11107/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11107/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-225207-m02' is duplicated with machine name 'multinode-225207-m02' in profile 'multinode-225207'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-225207-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-225207-m03 --driver=docker  --container-runtime=containerd: (20.36946365s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-225207
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-225207: exit status 80 (298.043758ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-225207 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-225207-m03 already exists in multinode-225207-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-225207-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-225207-m03: (2.423546832s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.23s)
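Both failures above enforce naming rules: exit 14 because the requested profile name collides with an existing profile's machine name, and exit 80 because the freshly created profile already claims the node name. A toy version of the first check, assuming minikube's "-m02"-style node suffixes; this is not minikube's actual validation code:

// namecheck.go: reject a new profile name that matches any machine
// name derivable from existing profiles.
package main

import "fmt"

// machineNames expands a profile into per-node machine names using the
// "-m02", "-m03", ... suffix convention visible in the log.
func machineNames(profile string, nodes int) []string {
	names := []string{profile}
	for i := 2; i <= nodes; i++ {
		names = append(names, fmt.Sprintf("%s-m%02d", profile, i))
	}
	return names
}

func validateProfileName(newName string, existing map[string]int) error {
	for profile, nodes := range existing {
		for _, m := range machineNames(profile, nodes) {
			if m == newName {
				return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q",
					newName, m, profile)
			}
		}
	}
	return nil
}

func main() {
	existing := map[string]int{"multinode-225207": 2}
	fmt.Println(validateProfileName("multinode-225207-m02", existing)) // rejected
	fmt.Println(validateProfileName("multinode-225207-m03", existing)) // <nil>: free
}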

                                                
                                    
TestPreload (113.82s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-398864 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-398864 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (47.163653959s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-398864 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-398864 image pull gcr.io/k8s-minikube/busybox: (2.820339425s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-398864
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-398864: (5.747013163s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-398864 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-398864 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (55.36644304s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-398864 image list
helpers_test.go:175: Cleaning up "test-preload-398864" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-398864
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-398864: (2.495013496s)
--- PASS: TestPreload (113.82s)

                                                
                                    
TestScheduledStopUnix (94.91s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-462070 --memory=3072 --driver=docker  --container-runtime=containerd
E1119 02:23:51.241340   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/addons-168589/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:23:54.424029   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/functional-266785/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-462070 --memory=3072 --driver=docker  --container-runtime=containerd: (19.011498551s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-462070 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1119 02:23:57.794239  190783 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:23:57.794614  190783 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:23:57.794625  190783 out.go:374] Setting ErrFile to fd 2...
	I1119 02:23:57.794629  190783 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:23:57.794862  190783 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11107/.minikube/bin
	I1119 02:23:57.795102  190783 out.go:368] Setting JSON to false
	I1119 02:23:57.795201  190783 mustload.go:66] Loading cluster: scheduled-stop-462070
	I1119 02:23:57.795534  190783 config.go:182] Loaded profile config "scheduled-stop-462070": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:23:57.795597  190783 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/scheduled-stop-462070/config.json ...
	I1119 02:23:57.795830  190783 mustload.go:66] Loading cluster: scheduled-stop-462070
	I1119 02:23:57.795976  190783 config.go:182] Loaded profile config "scheduled-stop-462070": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-462070 -n scheduled-stop-462070
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-462070 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1119 02:23:58.188921  190949 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:23:58.189053  190949 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:23:58.189065  190949 out.go:374] Setting ErrFile to fd 2...
	I1119 02:23:58.189072  190949 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:23:58.189313  190949 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11107/.minikube/bin
	I1119 02:23:58.189556  190949 out.go:368] Setting JSON to false
	I1119 02:23:58.189745  190949 daemonize_unix.go:73] killing process 190817 as it is an old scheduled stop
	I1119 02:23:58.189859  190949 mustload.go:66] Loading cluster: scheduled-stop-462070
	I1119 02:23:58.190391  190949 config.go:182] Loaded profile config "scheduled-stop-462070": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:23:58.190500  190949 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/scheduled-stop-462070/config.json ...
	I1119 02:23:58.190753  190949 mustload.go:66] Loading cluster: scheduled-stop-462070
	I1119 02:23:58.190907  190949 config.go:182] Loaded profile config "scheduled-stop-462070": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1119 02:23:58.196327   14657 retry.go:31] will retry after 100.856µs: open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/scheduled-stop-462070/pid: no such file or directory
I1119 02:23:58.197514   14657 retry.go:31] will retry after 192.079µs: open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/scheduled-stop-462070/pid: no such file or directory
I1119 02:23:58.198668   14657 retry.go:31] will retry after 119.242µs: open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/scheduled-stop-462070/pid: no such file or directory
I1119 02:23:58.199823   14657 retry.go:31] will retry after 191.426µs: open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/scheduled-stop-462070/pid: no such file or directory
I1119 02:23:58.200959   14657 retry.go:31] will retry after 488.506µs: open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/scheduled-stop-462070/pid: no such file or directory
I1119 02:23:58.202123   14657 retry.go:31] will retry after 833.063µs: open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/scheduled-stop-462070/pid: no such file or directory
I1119 02:23:58.203290   14657 retry.go:31] will retry after 1.534178ms: open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/scheduled-stop-462070/pid: no such file or directory
I1119 02:23:58.205616   14657 retry.go:31] will retry after 1.014845ms: open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/scheduled-stop-462070/pid: no such file or directory
I1119 02:23:58.206745   14657 retry.go:31] will retry after 2.120171ms: open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/scheduled-stop-462070/pid: no such file or directory
I1119 02:23:58.209935   14657 retry.go:31] will retry after 2.2685ms: open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/scheduled-stop-462070/pid: no such file or directory
I1119 02:23:58.213199   14657 retry.go:31] will retry after 6.351762ms: open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/scheduled-stop-462070/pid: no such file or directory
I1119 02:23:58.220462   14657 retry.go:31] will retry after 9.800749ms: open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/scheduled-stop-462070/pid: no such file or directory
I1119 02:23:58.230757   14657 retry.go:31] will retry after 8.091873ms: open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/scheduled-stop-462070/pid: no such file or directory
I1119 02:23:58.239175   14657 retry.go:31] will retry after 16.734341ms: open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/scheduled-stop-462070/pid: no such file or directory
I1119 02:23:58.256518   14657 retry.go:31] will retry after 20.839469ms: open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/scheduled-stop-462070/pid: no such file or directory
I1119 02:23:58.277827   14657 retry.go:31] will retry after 50.287934ms: open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/scheduled-stop-462070/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-462070 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-462070 -n scheduled-stop-462070
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-462070
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-462070 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1119 02:24:24.107625  191822 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:24:24.107890  191822 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:24:24.107900  191822 out.go:374] Setting ErrFile to fd 2...
	I1119 02:24:24.107906  191822 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:24:24.108129  191822 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11107/.minikube/bin
	I1119 02:24:24.108414  191822 out.go:368] Setting JSON to false
	I1119 02:24:24.108517  191822 mustload.go:66] Loading cluster: scheduled-stop-462070
	I1119 02:24:24.108831  191822 config.go:182] Loaded profile config "scheduled-stop-462070": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:24:24.108903  191822 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/scheduled-stop-462070/config.json ...
	I1119 02:24:24.109104  191822 mustload.go:66] Loading cluster: scheduled-stop-462070
	I1119 02:24:24.109224  191822 config.go:182] Loaded profile config "scheduled-stop-462070": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-462070
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-462070: exit status 7 (80.160903ms)

                                                
                                                
-- stdout --
	scheduled-stop-462070
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-462070 -n scheduled-stop-462070
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-462070 -n scheduled-stop-462070: exit status 7 (79.022652ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-462070" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-462070
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-462070: (4.354980687s)
--- PASS: TestScheduledStopUnix (94.91s)
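The retry.go lines in the middle of this test poll for the scheduled-stop pid file, roughly doubling the wait between attempts from the microsecond range upward. A sketch of that backoff loop; the path and attempt cap here are illustrative, not minikube's constants:

// pidretry.go: wait for a file to appear, doubling the delay after
// each miss, in the spirit of the retry.go log lines above.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForFile(path string, attempts int) error {
	delay := 100 * time.Microsecond
	for i := 0; i < attempts; i++ {
		if _, err := os.Stat(path); err == nil {
			return nil
		} else {
			fmt.Printf("will retry after %v: %v\n", delay, err)
		}
		time.Sleep(delay)
		delay *= 2 // grows 100µs, 200µs, 400µs, ... like the log's progression
	}
	return fmt.Errorf("%s never appeared after %d attempts", path, attempts)
}

func main() {
	fmt.Println(waitForFile("/tmp/scheduled-stop-demo/pid", 8))
}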

                                                
                                    
TestInsufficientStorage (9.41s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-676225 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
E1119 02:25:17.487539   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/functional-266785/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-676225 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (6.912060398s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"322a3538-e7e9-4ec4-8565-4e809c5844b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-676225] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9ff2a61a-4db8-4bdd-b186-66f39c018de9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21924"}}
	{"specversion":"1.0","id":"9472f465-c534-4bc9-9e2a-bdbe9d09ad04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"134f4b49-9fb7-4fec-8592-66387ee7c940","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21924-11107/kubeconfig"}}
	{"specversion":"1.0","id":"f1f141c1-8033-4c9a-907f-c4d9b61949b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11107/.minikube"}}
	{"specversion":"1.0","id":"c94e5ab7-d71d-4587-a69c-56f20591aaba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"abe00aec-58f1-45e5-a8fc-f2a1a87fe4f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"98762e87-9883-4c45-b6c2-72b81a97643c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"68306c23-36e2-470c-9f00-fe76abd8dc8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"4273c6a3-d9ca-4052-a0c7-b6c7f4267ae1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ae049a43-5ec2-4007-af8e-c0728f538432","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"2a3439d5-6377-4007-8ac0-39eded116255","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-676225\" primary control-plane node in \"insufficient-storage-676225\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a815cc5a-062c-4e06-9872-03c56bc3196c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763507788-21924 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"f100ef07-6bc2-437b-9a86-a582bf6bbf71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"7d99bb84-73af-4dd8-9cac-25640376cb3f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-676225 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-676225 --output=json --layout=cluster: exit status 7 (302.83501ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-676225","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-676225","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1119 02:25:20.832380  194063 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-676225" does not appear in /home/jenkins/minikube-integration/21924-11107/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-676225 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-676225 --output=json --layout=cluster: exit status 7 (297.919391ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-676225","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-676225","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1119 02:25:21.131007  194176 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-676225" does not appear in /home/jenkins/minikube-integration/21924-11107/kubeconfig
	E1119 02:25:21.142038  194176 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/insufficient-storage-676225/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-676225" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-676225
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-676225: (1.896210695s)
--- PASS: TestInsufficientStorage (9.41s)
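With --output=json, minikube writes one CloudEvents-style JSON object per line, and the failure itself arrives as an io.k8s.sigs.minikube.error event whose data carries the exit code (26, RSRC_DOCKER_STORAGE, above). A sketch of consuming that stream, modelling only the envelope fields visible in the log:

// events.go: scan JSON-lines output and surface error events. The
// event struct covers just the fields shown in the log, and the two
// sample lines are abbreviated copies of it.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

const stream = `{"specversion":"1.0","type":"io.k8s.sigs.minikube.info","data":{"message":"MINIKUBE_LOCATION=21924"}}
{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE","message":"Docker is out of disk space!"}}`

func main() {
	sc := bufio.NewScanner(strings.NewReader(stream))
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			fmt.Println("skipping malformed line:", err)
			continue
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit code %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}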

                                                
                                    
TestRunningBinaryUpgrade (51.83s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1820528700 start -p running-upgrade-962191 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1820528700 start -p running-upgrade-962191 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (26.472867422s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-962191 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-962191 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (20.771816263s)
helpers_test.go:175: Cleaning up "running-upgrade-962191" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-962191
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-962191: (1.961129377s)
--- PASS: TestRunningBinaryUpgrade (51.83s)

                                                
                                    
TestMissingContainerUpgrade (138.05s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.322561786 start -p missing-upgrade-865464 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.322561786 start -p missing-upgrade-865464 --memory=3072 --driver=docker  --container-runtime=containerd: (1m10.48474564s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-865464
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-865464: (1.322573015s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-865464
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-865464 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-865464 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m1.574090032s)
helpers_test.go:175: Cleaning up "missing-upgrade-865464" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-865464
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-865464: (1.975307001s)
--- PASS: TestMissingContainerUpgrade (138.05s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.99s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.99s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (102.32s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.544310422 start -p stopped-upgrade-854620 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.544310422 start -p stopped-upgrade-854620 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (1m11.217386326s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.544310422 -p stopped-upgrade-854620 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.544310422 -p stopped-upgrade-854620 stop: (1.830932971s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-854620 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-854620 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (29.275127823s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (102.32s)

                                                
                                    
TestNetworkPlugins/group/false (4.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-212776 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-212776 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (207.78144ms)

                                                
                                                
-- stdout --
	* [false-212776] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21924
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21924-11107/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11107/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 02:26:24.017397  207497 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:26:24.017670  207497 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:26:24.017681  207497 out.go:374] Setting ErrFile to fd 2...
	I1119 02:26:24.017685  207497 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:26:24.017901  207497 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11107/.minikube/bin
	I1119 02:26:24.018353  207497 out.go:368] Setting JSON to false
	I1119 02:26:24.019548  207497 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4124,"bootTime":1763515060,"procs":278,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 02:26:24.019639  207497 start.go:143] virtualization: kvm guest
	I1119 02:26:24.021700  207497 out.go:179] * [false-212776] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 02:26:24.023110  207497 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:26:24.023114  207497 notify.go:221] Checking for updates...
	I1119 02:26:24.025559  207497 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:26:24.026895  207497 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-11107/kubeconfig
	I1119 02:26:24.028235  207497 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11107/.minikube
	I1119 02:26:24.029389  207497 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 02:26:24.030632  207497 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:26:24.032570  207497 config.go:182] Loaded profile config "kubernetes-upgrade-896338": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1119 02:26:24.032720  207497 config.go:182] Loaded profile config "missing-upgrade-865464": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1119 02:26:24.032870  207497 config.go:182] Loaded profile config "stopped-upgrade-854620": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1119 02:26:24.033010  207497 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:26:24.062399  207497 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 02:26:24.062502  207497 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:26:24.144154  207497 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:84 SystemTime:2025-11-19 02:26:24.124903695 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 02:26:24.144310  207497 docker.go:319] overlay module found
	I1119 02:26:24.146239  207497 out.go:179] * Using the docker driver based on user configuration
	I1119 02:26:24.147441  207497 start.go:309] selected driver: docker
	I1119 02:26:24.147463  207497 start.go:930] validating driver "docker" against <nil>
	I1119 02:26:24.147478  207497 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:26:24.151853  207497 out.go:203] 
	W1119 02:26:24.153294  207497 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1119 02:26:24.155333  207497 out.go:203] 

                                                
                                                
** /stderr **
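The exit-14 failure above is a pre-flight check: --cni=false is rejected whenever the chosen container runtime depends on an external CNI plugin. A toy version of the rule; the runtime list and the docker exemption are assumptions here, not minikube's actual code:

// cnicheck.go: reproduce the shape of the MK_USAGE validation above.
package main

import (
	"errors"
	"fmt"
)

func validateCNI(cni, runtime string) error {
	// containerd (and cri-o) have no built-in pod networking, so
	// disabling CNI outright cannot work there.
	if cni == "false" && (runtime == "containerd" || runtime == "cri-o") {
		return errors.New("MK_USAGE: The \"" + runtime + "\" container runtime requires CNI")
	}
	return nil
}

func main() {
	fmt.Println(validateCNI("false", "containerd")) // rejected, as in the log
	fmt.Println(validateCNI("false", "docker"))     // <nil> under this sketch's assumption
}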
net_test.go:88: 
----------------------- debugLogs start: false-212776 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-212776

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-212776

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-212776

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-212776

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-212776

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-212776

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-212776

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-212776

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-212776

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-212776

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212776"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212776"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212776"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-212776

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212776"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212776"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-212776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-212776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-212776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-212776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-212776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-212776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-212776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-212776" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212776"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212776"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212776"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212776"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212776"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-212776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-212776" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-212776" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212776"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212776"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212776"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212776"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212776"
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21924-11107/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 02:26:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-896338
contexts:
- context:
    cluster: kubernetes-upgrade-896338
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 02:26:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-896338
  name: kubernetes-upgrade-896338
current-context: kubernetes-upgrade-896338
kind: Config
users:
- name: kubernetes-upgrade-896338
  user:
    client-certificate: /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kubernetes-upgrade-896338/client.crt
    client-key: /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kubernetes-upgrade-896338/client.key
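
Note: the kubeconfig captured above still points at kubernetes-upgrade-896338, not the false-212776 profile being debugged, which is why every kubectl-based collector in this dump reports a missing context. A quick manual cross-check with stock kubectl (not part of the test harness) would be:

	$ kubectl config current-context            # prints kubernetes-upgrade-896338, per the config above
	$ kubectl config get-contexts false-212776  # expected to fail: no such context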
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-212776
>>> host: docker daemon status:
* Profile "false-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212776"
>>> host: docker daemon config:
* Profile "false-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212776"
>>> host: /etc/docker/daemon.json:
* Profile "false-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212776"
>>> host: docker system info:
* Profile "false-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212776"
>>> host: cri-docker daemon status:
* Profile "false-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212776"
>>> host: cri-docker daemon config:
* Profile "false-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212776"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212776"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212776"
>>> host: cri-dockerd version:
* Profile "false-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212776"
>>> host: containerd daemon status:
* Profile "false-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212776"
>>> host: containerd daemon config:
* Profile "false-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212776"
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212776"
>>> host: /etc/containerd/config.toml:
* Profile "false-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212776"
>>> host: containerd config dump:
* Profile "false-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212776"
>>> host: crio daemon status:
* Profile "false-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212776"
>>> host: crio daemon config:
* Profile "false-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212776"
>>> host: /etc/crio:
* Profile "false-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212776"
>>> host: crio config:
* Profile "false-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212776"
----------------------- debugLogs end: false-212776 [took: 3.949231694s] --------------------------------
helpers_test.go:175: Cleaning up "false-212776" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-212776
--- PASS: TestNetworkPlugins/group/false (4.36s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.34s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-854620
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-854620: (1.341321174s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.34s)

TestPause/serial/Start (41.04s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-290512 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-290512 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (41.040938805s)
--- PASS: TestPause/serial/Start (41.04s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-676864 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-676864 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (82.469396ms)
-- stdout --
	* [NoKubernetes-676864] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21924
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21924-11107/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11107/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
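
This failure is the expected outcome: --no-kubernetes and --kubernetes-version are mutually exclusive, and exit status 14 accompanies the MK_USAGE error shown in stderr. A minimal reproduction, assuming the same minikube binary (the second form is what the next test runs successfully):

	$ minikube start -p NoKubernetes-676864 --no-kubernetes --kubernetes-version=v1.28.0   # MK_USAGE, exit 14
	$ minikube start -p NoKubernetes-676864 --no-kubernetes                                # valid: no version flag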

TestNoKubernetes/serial/StartWithK8s (22.97s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-676864 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-676864 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (22.626237883s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-676864 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (22.97s)

TestNoKubernetes/serial/StartWithStopK8s (16.13s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-676864 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-676864 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (13.645330926s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-676864 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-676864 status -o json: exit status 2 (370.496453ms)
-- stdout --
	{"Name":"NoKubernetes-676864","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-676864
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-676864: (2.112715181s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.13s)

TestPause/serial/SecondStartNoReconfiguration (5.84s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-290512 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-290512 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5.832488784s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (5.84s)

TestPause/serial/Pause (0.8s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-290512 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.80s)

TestPause/serial/VerifyStatus (0.4s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-290512 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-290512 --output=json --layout=cluster: exit status 2 (400.630275ms)
-- stdout --
	{"Name":"pause-290512","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-290512","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.40s)
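
The --layout=cluster status encodes component state in HTTP-like codes, as seen in the output above (200 OK, 405 Stopped, 418 Paused), and the command intentionally exits non-zero while the cluster is paused. A sketch for pulling just the per-component states, assuming jq is available on the host:

	$ minikube status -p pause-290512 --output=json --layout=cluster | jq '.Nodes[].Components'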

TestPause/serial/Unpause (0.67s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-290512 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.67s)

TestPause/serial/PauseAgain (0.78s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-290512 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.78s)

TestPause/serial/DeletePaused (3.21s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-290512 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-290512 --alsologtostderr -v=5: (3.205415155s)
--- PASS: TestPause/serial/DeletePaused (3.21s)

TestPause/serial/VerifyDeletedResources (0.71s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-290512
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-290512: exit status 1 (21.789334ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-290512: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.71s)
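
The verification leans on the docker CLI directly: after delete, the profile's container, volume, and network should all be gone. An equivalent manual sweep (standard docker flags; the profile name doubles as the resource name):

	$ docker ps -a --filter name=pause-290512        # expect no containers
	$ docker volume inspect pause-290512             # expect exit 1: "no such volume"
	$ docker network ls --filter name=pause-290512   # expect no networks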

TestNoKubernetes/serial/Start (7.97s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-676864 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-676864 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (7.97317147s)
--- PASS: TestNoKubernetes/serial/Start (7.97s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21924-11107/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-676864 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-676864 "sudo systemctl is-active --quiet service kubelet": exit status 1 (325.425403ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)
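
systemctl is-active exits 0 only for an active unit, so the status-3 result relayed through ssh is how the test concludes kubelet is not running. The same check by hand, minus --quiet so the state is printed (a sketch using the minikube ssh wrapper):

	$ minikube ssh -p NoKubernetes-676864 "sudo systemctl is-active kubelet"   # prints inactive, exit 3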

TestNoKubernetes/serial/ProfileList (1.69s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.69s)

TestNoKubernetes/serial/Stop (1.32s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-676864
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-676864: (1.323122976s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

TestNoKubernetes/serial/StartNoArgs (7.11s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-676864 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-676864 --driver=docker  --container-runtime=containerd: (7.105265354s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.11s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-676864 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-676864 "sudo systemctl is-active --quiet service kubelet": exit status 1 (337.229386ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

TestNetworkPlugins/group/auto/Start (41.79s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-212776 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-212776 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (41.788059288s)
--- PASS: TestNetworkPlugins/group/auto/Start (41.79s)

TestNetworkPlugins/group/kindnet/Start (42.36s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-212776 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E1119 02:28:51.238496   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/addons-168589/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:28:54.421218   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/functional-266785/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-212776 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (42.361065096s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (42.36s)

TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-212776 "pgrep -a kubelet"
I1119 02:29:14.280897   14657 config.go:182] Loaded profile config "auto-212776": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

TestNetworkPlugins/group/auto/NetCatPod (9.2s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-212776 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2kbtb" [87489ccb-174b-4882-840c-b08dda660b2c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2kbtb" [87489ccb-174b-4882-840c-b08dda660b2c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003824447s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.20s)
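
The NetCatPod step re-applies testdata/netcat-deployment.yaml and waits for pods labelled app=netcat to report Ready. The equivalent check with stock kubectl (the label and deployment name come from the manifest used above):

	$ kubectl --context auto-212776 get pods -l app=netcat
	$ kubectl --context auto-212776 rollout status deployment/netcat --timeout=15m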

TestNetworkPlugins/group/auto/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-212776 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)
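
The DNS check resolves kubernetes.default from inside the pod, exercising cluster DNS (CoreDNS) plus the pod's resolv.conf search path; the same probe can be run by hand:

	$ kubectl --context auto-212776 exec deployment/netcat -- nslookup kubernetes.default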

TestNetworkPlugins/group/auto/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-212776 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-212776 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)
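
HairPin differs from Localhost in what it proves: the pod dials its own Service name, so the connection leaves the pod and must be NATed back to it (hairpin traffic), whereas Localhost only touches 127.0.0.1 inside the pod. The probe itself is plain netcat, where -z means connect-only and -w 5 bounds the wait:

	$ kubectl --context auto-212776 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"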

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-fz9hg" [4b6eda9f-c519-49dc-b7fb-6dd031985227] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003203987s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
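
ControllerPod simply waits for the CNI's own pod to be Running; for kindnet the daemonset pods carry the app=kindnet label in kube-system, so the state is visible with:

	$ kubectl --context kindnet-212776 -n kube-system get pods -l app=kindnet -o wide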

TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-212776 "pgrep -a kubelet"
I1119 02:29:39.056979   14657 config.go:182] Loaded profile config "kindnet-212776": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

TestNetworkPlugins/group/kindnet/NetCatPod (8.35s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-212776 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-p5vs2" [b93c3a0d-e47b-47b2-bf0e-940cd530759b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-p5vs2" [b93c3a0d-e47b-47b2-bf0e-940cd530759b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.123470753s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.35s)

TestNetworkPlugins/group/calico/Start (51.36s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-212776 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-212776 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (51.36064758s)
--- PASS: TestNetworkPlugins/group/calico/Start (51.36s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-212776 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-212776 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-212776 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

TestNetworkPlugins/group/custom-flannel/Start (50.9s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-212776 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-212776 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (50.899909174s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (50.90s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-b4pgg" [b13f678e-89bc-4e65-9a3c-40360ca81d93] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005149556s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-212776 "pgrep -a kubelet"
I1119 02:30:40.498060   14657 config.go:182] Loaded profile config "calico-212776": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

TestNetworkPlugins/group/calico/NetCatPod (8.18s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-212776 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bvhqp" [179ac669-5377-46e8-ba81-676f15b2b6c2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-bvhqp" [179ac669-5377-46e8-ba81-676f15b2b6c2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 8.004355207s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (8.18s)

TestNetworkPlugins/group/calico/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-212776 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

TestNetworkPlugins/group/calico/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-212776 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

TestNetworkPlugins/group/calico/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-212776 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-212776 "pgrep -a kubelet"
I1119 02:30:59.195821   14657 config.go:182] Loaded profile config "custom-flannel-212776": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (8.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-212776 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2z9jc" [7c5ddb6a-9760-4b54-a1ed-d1742dff8cee] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2z9jc" [7c5ddb6a-9760-4b54-a1ed-d1742dff8cee] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.003745715s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.20s)

TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-212776 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-212776 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-212776 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

TestNetworkPlugins/group/enable-default-cni/Start (66.58s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-212776 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-212776 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m6.577799916s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (66.58s)

TestNetworkPlugins/group/flannel/Start (51.38s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-212776 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-212776 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (51.379323149s)
--- PASS: TestNetworkPlugins/group/flannel/Start (51.38s)

TestNetworkPlugins/group/bridge/Start (63.08s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-212776 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-212776 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m3.080349427s)
--- PASS: TestNetworkPlugins/group/bridge/Start (63.08s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-212776 "pgrep -a kubelet"
I1119 02:32:15.939869   14657 config.go:182] Loaded profile config "enable-default-cni-212776": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-212776 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-54d2p" [02c0c1e1-a3b1-4ec6-9502-7d5e35aefb31] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-54d2p" [02c0c1e1-a3b1-4ec6-9502-7d5e35aefb31] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004809482s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.19s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-dh868" [396a2018-45e4-4708-a6c7-f9a9b15b8da4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004480128s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-212776 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-212776 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-212776 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-212776 "pgrep -a kubelet"
I1119 02:32:26.215559   14657 config.go:182] Loaded profile config "flannel-212776": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (8.18s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-212776 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-cb9zl" [26ca7b95-a77d-42c2-a8eb-e2682b35b879] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-cb9zl" [26ca7b95-a77d-42c2-a8eb-e2682b35b879] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.003835556s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.18s)

TestNetworkPlugins/group/flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-212776 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

TestNetworkPlugins/group/flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-212776 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

TestNetworkPlugins/group/flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-212776 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

TestStartStop/group/old-k8s-version/serial/FirstStart (52.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-691094 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-691094 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (52.33224716s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (52.33s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-212776 "pgrep -a kubelet"
I1119 02:32:46.724774   14657 config.go:182] Loaded profile config "bridge-212776": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-212776 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hgb7k" [f0f01970-6f06-4adb-8614-5777a42d7880] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hgb7k" [f0f01970-6f06-4adb-8614-5777a42d7880] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004522419s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

TestStartStop/group/no-preload/serial/FirstStart (53.81s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-483142 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-483142 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (53.811829602s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (53.81s)

TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-212776 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-212776 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-212776 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

TestStartStop/group/embed-certs/serial/FirstStart (41.94s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-168452 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-168452 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (41.942308577s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (41.94s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.9s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-691094 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-691094 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.90s)

TestStartStop/group/old-k8s-version/serial/Stop (12.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-691094 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-691094 --alsologtostderr -v=3: (12.064202835s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.06s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.82s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-483142 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-483142 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.82s)

TestStartStop/group/no-preload/serial/Stop (12.15s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-483142 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-483142 --alsologtostderr -v=3: (12.148110533s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.15s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-691094 -n old-k8s-version-691094
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-691094 -n old-k8s-version-691094: exit status 7 (83.297021ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-691094 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)
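
Note: minikube status encodes the host, cluster, and Kubernetes states as bits of the exit code (see minikube status --help), so exit status 7 after a stop means all three report down; the test treats that as expected and then enables the dashboard addon while the cluster is offline. The probe uses the Go-template form of status, e.g.:

    out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-691094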

TestStartStop/group/old-k8s-version/serial/SecondStart (47.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-691094 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-691094 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (46.9901106s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-691094 -n old-k8s-version-691094
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (47.33s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-483142 -n no-preload-483142
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-483142 -n no-preload-483142: exit status 7 (116.309092ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-483142 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/no-preload/serial/SecondStart (48.17s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-483142 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-483142 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (47.821046704s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-483142 -n no-preload-483142
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (48.17s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.89s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-168452 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1119 02:34:17.038472   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/auto-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-168452 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.89s)
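
Note: the E1119 cert_rotation lines interleaved here and through the rest of the run are almost certainly noise from the shared test process: client-go's certificate watcher keeps trying to reload client certs for network-plugin profiles (auto-212776, kindnet-212776, ...) that earlier tests already deleted, so the "no such file or directory" errors do not indicate a failure in the currently running test. The surviving profiles can be listed directly (path taken from the error text):

    ls /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/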

TestStartStop/group/embed-certs/serial/Stop (12.13s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-168452 --alsologtostderr -v=3
E1119 02:34:19.599854   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/auto-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:34:24.721262   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/auto-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-168452 --alsologtostderr -v=3: (12.125859718s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.13s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-168452 -n embed-certs-168452
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-168452 -n embed-certs-168452: exit status 7 (87.100584ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-168452 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/embed-certs/serial/SecondStart (44.6s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-168452 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1119 02:34:32.756689   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kindnet-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:34:32.763075   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kindnet-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:34:32.774495   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kindnet-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:34:32.795882   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kindnet-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:34:32.837331   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kindnet-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:34:32.918746   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kindnet-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:34:33.080302   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kindnet-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:34:33.401840   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kindnet-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:34:34.044108   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kindnet-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:34:34.963142   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/auto-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:34:35.325816   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kindnet-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:34:37.888039   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kindnet-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:34:43.010065   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kindnet-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-168452 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (44.26319561s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-168452 -n embed-certs-168452
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (44.60s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-6nchl" [875da40e-c420-42fd-8447-29797f6570d1] Running
E1119 02:34:53.251769   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kindnet-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:34:55.445201   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/auto-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004205528s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
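
Note: UserAppExistsAfterStop waits up to 9m0s for pods labelled k8s-app=kubernetes-dashboard to come back healthy after the restart. A roughly equivalent standalone check (hypothetical; the suite uses its own polling helper rather than kubectl wait):

    kubectl --context old-k8s-version-691094 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=540s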

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-6nchl" [875da40e-c420-42fd-8447-29797f6570d1] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004271787s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-691094 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-691094 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)
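
Note: VerifyKubernetesImages lists everything in the node's container runtime and reports images outside minikube's expected set; none of the three above fail the test. The busybox image is likely left over from this profile's DeployApp step, and the kindnetd images plausibly belong to the CNI deployed in the cluster. The same listing in a human-readable form (the table format is an assumption about minikube image list's supported formats):

    out/minikube-linux-amd64 -p old-k8s-version-691094 image list --format=table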

TestStartStop/group/old-k8s-version/serial/Pause (2.8s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-691094 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-691094 -n old-k8s-version-691094
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-691094 -n old-k8s-version-691094: exit status 2 (323.723359ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-691094 -n old-k8s-version-691094
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-691094 -n old-k8s-version-691094: exit status 2 (323.966954ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-691094 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-691094 -n old-k8s-version-691094
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-691094 -n old-k8s-version-691094
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.80s)
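
Note: the Pause subtest freezes the control plane and kubelet, then checks the templated status fields: {{.APIServer}} should report Paused and {{.Kubelet}} Stopped, each with exit status 2, which the test tolerates while paused. unpause reverses this, and the two final status probes are expected to succeed. The same cycle can be driven by hand:

    out/minikube-linux-amd64 pause -p old-k8s-version-691094
    out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-691094
    out/minikube-linux-amd64 unpause -p old-k8s-version-691094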

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-fbmjf" [964d4e86-ccc6-46cb-b670-9c00a27be68f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003996138s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-543625 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-543625 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (42.402003197s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.40s)
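
Note: --apiserver-port=8444 moves the API server off minikube's default 8443, which is what the default-k8s-diff-port profile exists to cover. A hypothetical way to confirm the port from the running control plane (the apiserver static pod is named after the node, which matches the profile name):

    kubectl --context default-k8s-diff-port-543625 -n kube-system get pod kube-apiserver-default-k8s-diff-port-543625 -o jsonpath='{.spec.containers[0].command}'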

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-fbmjf" [964d4e86-ccc6-46cb-b670-9c00a27be68f] Running
E1119 02:35:13.733060   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kindnet-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004318205s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-483142 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dcdgw" [0eb99483-e4d0-4874-9f8e-d5075a50b48e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003855195s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-483142 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (3.08s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-483142 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-483142 -n no-preload-483142
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-483142 -n no-preload-483142: exit status 2 (355.308648ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-483142 -n no-preload-483142
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-483142 -n no-preload-483142: exit status 2 (331.622342ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-483142 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-483142 -n no-preload-483142
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-483142 -n no-preload-483142
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.08s)

TestStartStop/group/newest-cni/serial/FirstStart (26.22s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-239505 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-239505 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (26.221226372s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (26.22s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dcdgw" [0eb99483-e4d0-4874-9f8e-d5075a50b48e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004761168s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-168452 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-168452 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/embed-certs/serial/Pause (3.22s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-168452 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-168452 -n embed-certs-168452
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-168452 -n embed-certs-168452: exit status 2 (351.085753ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-168452 -n embed-certs-168452
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-168452 -n embed-certs-168452: exit status 2 (370.137053ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-168452 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-168452 -n embed-certs-168452
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-168452 -n embed-certs-168452
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.22s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.78s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-239505 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.78s)

TestStartStop/group/newest-cni/serial/Stop (1.3s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-239505 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-239505 --alsologtostderr -v=3: (1.298386203s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.30s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-239505 -n newest-cni-239505
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-239505 -n newest-cni-239505: exit status 7 (80.878519ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-239505 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (9.91s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-239505 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-239505 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (9.568247803s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-239505 -n newest-cni-239505
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (9.91s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)
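
Note: the zero-duration newest-cni subtests are by design: the profile starts with --network-plugin=cni and --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 but installs no CNI plugin, so, as the WARNINGs above state, pods cannot schedule and the app-level checks become no-ops. Making them meaningful would require first applying a CNI manifest configured for that CIDR, e.g. (hypothetical manifest, not part of the suite):

    kubectl --context newest-cni-239505 apply -f my-cni-daemonset.yaml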

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-239505 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/newest-cni/serial/Pause (2.58s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-239505 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-239505 -n newest-cni-239505
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-239505 -n newest-cni-239505: exit status 2 (314.141458ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-239505 -n newest-cni-239505
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-239505 -n newest-cni-239505: exit status 2 (315.105099ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-239505 --alsologtostderr -v=1
E1119 02:35:59.379678   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/custom-flannel-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:35:59.386715   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/custom-flannel-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:35:59.398206   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/custom-flannel-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-239505 -n newest-cni-239505
E1119 02:35:59.419832   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/custom-flannel-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:35:59.461238   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/custom-flannel-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:35:59.542942   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/custom-flannel-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:35:59.705032   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/custom-flannel-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-239505 -n newest-cni-239505
E1119 02:36:00.027103   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/custom-flannel-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.58s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.8s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-543625 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-543625 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.80s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-543625 --alsologtostderr -v=3
E1119 02:36:09.635297   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/custom-flannel-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:36:15.169327   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/calico-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-543625 --alsologtostderr -v=3: (12.037604368s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.04s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-543625 -n default-k8s-diff-port-543625
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-543625 -n default-k8s-diff-port-543625: exit status 7 (79.385762ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-543625 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.81s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-543625 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1119 02:36:19.876763   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/custom-flannel-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:36:40.358309   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/custom-flannel-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:36:54.307585   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/addons-168589/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:36:56.131050   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/calico-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:36:58.329512   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/auto-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-543625 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (50.476403316s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-543625 -n default-k8s-diff-port-543625
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.81s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-z9p4k" [54232140-b8a1-4480-baba-963697032940] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003203307s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-z9p4k" [54232140-b8a1-4480-baba-963697032940] Running
E1119 02:37:16.116530   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/enable-default-cni-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:37:16.123005   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/enable-default-cni-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:37:16.134492   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/enable-default-cni-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:37:16.156007   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/enable-default-cni-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:37:16.197520   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/enable-default-cni-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:37:16.279098   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/enable-default-cni-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:37:16.440718   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/enable-default-cni-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:37:16.616704   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kindnet-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:37:16.762316   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/enable-default-cni-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:37:17.403708   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/enable-default-cni-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:37:18.685734   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/enable-default-cni-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:37:19.915274   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/flannel-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:37:19.921670   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/flannel-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:37:19.933075   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/flannel-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:37:19.954580   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/flannel-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:37:19.996017   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/flannel-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:37:20.077653   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/flannel-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:37:20.239241   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/flannel-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003764949s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-543625 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E1119 02:37:20.561277   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/flannel-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-543625 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.71s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-543625 --alsologtostderr -v=1
E1119 02:37:21.202611   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/flannel-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:37:21.247145   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/enable-default-cni-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:37:21.320529   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/custom-flannel-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-543625 -n default-k8s-diff-port-543625
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-543625 -n default-k8s-diff-port-543625: exit status 2 (315.130386ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-543625 -n default-k8s-diff-port-543625
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-543625 -n default-k8s-diff-port-543625: exit status 2 (310.529932ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-543625 --alsologtostderr -v=1
E1119 02:37:22.484296   14657 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/flannel-212776/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-543625 -n default-k8s-diff-port-543625
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-543625 -n default-k8s-diff-port-543625
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.71s)
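Note: the pause/unpause cycle above can be replayed by hand. Below is a minimal Go sketch of the same sequence (binary path and profile name taken from this run; everything else is assumed, not minikube's test code). "status" exiting 2 while a component is paused or stopped is what the "status error: exit status 2 (may be ok)" lines record.

	// Sketch only: pause -> status (expect exit 2) -> unpause -> status (expect 0).
	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes the minikube binary used by this job and returns its exit code.
	func run(args ...string) int {
		out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
		fmt.Print(string(out))
		if exitErr, ok := err.(*exec.ExitError); ok {
			return exitErr.ExitCode()
		}
		return 0
	}

	func main() {
		profile := "default-k8s-diff-port-543625"
		run("pause", "-p", profile)
		// While paused, {{.Kubelet}} renders "Stopped" and status exits 2.
		fmt.Println("exit:", run("status", "--format={{.Kubelet}}", "-p", profile))
		run("unpause", "-p", profile)
		fmt.Println("exit:", run("status", "--format={{.Kubelet}}", "-p", profile))
	}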

Test skip (26/333)

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)
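Note: the DownloadOnly skips in this group all hinge on a preload tarball already being present. A sketch of that gate is below; the download.PreloadExists helper and its argument order are recalled from minikube's pkg/minikube/download and should be treated as an assumption rather than a quote.

	// Sketch only: skip image-caching checks when a preload tarball covers them.
	if download.PreloadExists(k8sVersion, containerRuntime, driverName) {
		t.Skipf("Preload exists, images won't be cached")
	}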

TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)
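Note: this skip and the later DockerEnv, PodmanEnv, and TestSkaffold skips are the same runtime gate. A hypothetical sketch of the pattern follows; the helper and parameter names are invented for illustration, not minikube's actual code.

	// Sketch only: skip docker-specific tests when the job targets another runtime.
	func skipUnlessRuntime(t *testing.T, want, got string) {
		if got != want {
			t.Skipf("skipping: only runs with %s container runtime, currently testing %s", want, got)
		}
	}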

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (4.32s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-212776 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-212776

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-212776

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-212776

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-212776

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-212776

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-212776

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-212776

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-212776

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-212776

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-212776

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212776"

>>> host: /etc/hosts:
* Profile "kubenet-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212776"

>>> host: /etc/resolv.conf:
* Profile "kubenet-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212776"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-212776

>>> host: crictl pods:
* Profile "kubenet-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212776"

>>> host: crictl containers:
* Profile "kubenet-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212776"

>>> k8s: describe netcat deployment:
error: context "kubenet-212776" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-212776" does not exist

>>> k8s: netcat logs:
error: context "kubenet-212776" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-212776" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-212776" does not exist

>>> k8s: coredns logs:
error: context "kubenet-212776" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-212776" does not exist

>>> k8s: api server logs:
error: context "kubenet-212776" does not exist

>>> host: /etc/cni:
* Profile "kubenet-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212776"

>>> host: ip a s:
* Profile "kubenet-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212776"

>>> host: ip r s:
* Profile "kubenet-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212776"

>>> host: iptables-save:
* Profile "kubenet-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212776"

>>> host: iptables table nat:
* Profile "kubenet-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212776"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-212776" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-212776" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-212776" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212776"

>>> host: kubelet daemon config:
* Profile "kubenet-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212776"

>>> k8s: kubelet logs:
* Profile "kubenet-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212776"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212776"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212776"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21924-11107/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 02:26:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-896338
contexts:
- context:
    cluster: kubernetes-upgrade-896338
    user: kubernetes-upgrade-896338
  name: kubernetes-upgrade-896338
current-context: kubernetes-upgrade-896338
kind: Config
users:
- name: kubernetes-upgrade-896338
  user:
    client-certificate: /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kubernetes-upgrade-896338/client.crt
    client-key: /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kubernetes-upgrade-896338/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-212776

>>> host: docker daemon status:
* Profile "kubenet-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212776"

>>> host: docker daemon config:
* Profile "kubenet-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212776"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212776"

>>> host: docker system info:
* Profile "kubenet-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212776"

>>> host: cri-docker daemon status:
* Profile "kubenet-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212776"

>>> host: cri-docker daemon config:
* Profile "kubenet-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212776"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212776"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212776"

>>> host: cri-dockerd version:
* Profile "kubenet-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212776"

>>> host: containerd daemon status:
* Profile "kubenet-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212776"

>>> host: containerd daemon config:
* Profile "kubenet-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212776"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212776"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212776"

>>> host: containerd config dump:
* Profile "kubenet-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212776"

>>> host: crio daemon status:
* Profile "kubenet-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212776"

>>> host: crio daemon config:
* Profile "kubenet-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212776"

>>> host: /etc/crio:
* Profile "kubenet-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212776"

>>> host: crio config:
* Profile "kubenet-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212776"

----------------------- debugLogs end: kubenet-212776 [took: 4.135037854s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-212776" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-212776
--- SKIP: TestNetworkPlugins/group/kubenet (4.32s)

TestNetworkPlugins/group/cilium (5.3s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-212776 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-212776

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-212776

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-212776

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-212776

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-212776

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-212776

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-212776

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-212776

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-212776

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-212776

>>> host: /etc/nsswitch.conf:
* Profile "cilium-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212776"

>>> host: /etc/hosts:
* Profile "cilium-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212776"

>>> host: /etc/resolv.conf:
* Profile "cilium-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212776"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-212776

>>> host: crictl pods:
* Profile "cilium-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212776"

>>> host: crictl containers:
* Profile "cilium-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212776"

>>> k8s: describe netcat deployment:
error: context "cilium-212776" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-212776" does not exist

>>> k8s: netcat logs:
error: context "cilium-212776" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-212776" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-212776" does not exist

>>> k8s: coredns logs:
error: context "cilium-212776" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-212776" does not exist

>>> k8s: api server logs:
error: context "cilium-212776" does not exist

>>> host: /etc/cni:
* Profile "cilium-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212776"

>>> host: ip a s:
* Profile "cilium-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212776"

>>> host: ip r s:
* Profile "cilium-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212776"

>>> host: iptables-save:
* Profile "cilium-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212776"

>>> host: iptables table nat:
* Profile "cilium-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212776"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-212776

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-212776

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-212776" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-212776" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-212776

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-212776

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-212776" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-212776" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-212776" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-212776" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-212776" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212776"

>>> host: kubelet daemon config:
* Profile "cilium-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212776"

>>> k8s: kubelet logs:
* Profile "cilium-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212776"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212776"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212776"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21924-11107/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 02:26:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-896338
contexts:
- context:
    cluster: kubernetes-upgrade-896338
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 02:26:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-896338
  name: kubernetes-upgrade-896338
current-context: kubernetes-upgrade-896338
kind: Config
users:
- name: kubernetes-upgrade-896338
  user:
    client-certificate: /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kubernetes-upgrade-896338/client.crt
    client-key: /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kubernetes-upgrade-896338/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-212776

>>> host: docker daemon status:
* Profile "cilium-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212776"

>>> host: docker daemon config:
* Profile "cilium-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212776"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212776"

>>> host: docker system info:
* Profile "cilium-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212776"

>>> host: cri-docker daemon status:
* Profile "cilium-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212776"

>>> host: cri-docker daemon config:
* Profile "cilium-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212776"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212776"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212776"

>>> host: cri-dockerd version:
* Profile "cilium-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212776"

>>> host: containerd daemon status:
* Profile "cilium-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212776"

>>> host: containerd daemon config:
* Profile "cilium-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212776"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212776"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212776"

>>> host: containerd config dump:
* Profile "cilium-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212776"

>>> host: crio daemon status:
* Profile "cilium-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212776"

>>> host: crio daemon config:
* Profile "cilium-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212776"

>>> host: /etc/crio:
* Profile "cilium-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212776"

>>> host: crio config:
* Profile "cilium-212776" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212776"

----------------------- debugLogs end: cilium-212776 [took: 5.142429983s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-212776" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-212776
--- SKIP: TestNetworkPlugins/group/cilium (5.30s)

TestStartStop/group/disable-driver-mounts (0.18s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-433931" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-433931
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)
