Test Report: Docker_Linux_docker_arm64 19649

32fce3c1cb58db02ee1cd4b36165a584c8a30f83:2024-09-16:36244
Test failures (1/343)

|-------|------------------------------|----------|
| Order | Failed test                  | Duration |
|-------|------------------------------|----------|
| 33    | TestAddons/parallel/Registry | 73.74s   |
|-------|------------------------------|----------|
TestAddons/parallel/Registry (73.74s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.324492ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-bbjqg" [abab3938-5ca1-4f67-bec8-0f5518fa637b] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003379897s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-5lk22" [095802f7-441a-4970-b3a6-8d88eab7bb43] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003460049s
addons_test.go:342: (dbg) Run:  kubectl --context addons-723934 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-723934 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-723934 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.127646282s)
-- stdout --
	pod "registry-test" deleted
-- /stdout --
** stderr ** 
	error: timed out waiting for the condition
** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-723934 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-723934 ip
2024/09/16 19:14:46 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-723934 addons disable registry --alsologtostderr -v=1
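The failing step above (addons_test.go:347) probes the in-cluster registry with `wget --spider -S` and expects an "HTTP/1.1 200" response within the timeout. A minimal sketch of that health probe, written outside the cluster for illustration; `registry_healthy` is a hypothetical helper, not part of the minikube test suite, and the NodePort URL from the log (`http://192.168.49.2:5000`) is only an example target:

```python
# Sketch of the health check the test performs: HEAD the registry URL
# (the equivalent of `wget --spider`) and report whether it answered
# with HTTP 200 before the timeout. Hypothetical helper, not minikube code.
import urllib.request


def registry_healthy(url: str, timeout: float = 5.0) -> bool:
    """Return True if `url` answers a HEAD request with HTTP 200."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # covers connection refused, DNS failure, and timeout
        return False


# Example (against the NodePort address the test later probes):
# registry_healthy("http://192.168.49.2:5000")
```

In the failed run, the in-cluster equivalent of this probe never got a response, so the busybox pod sat until kubectl's one-minute attach timeout fired ("timed out waiting for the condition"), which is why stdout shows only the pod deletion.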
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-723934
helpers_test.go:235: (dbg) docker inspect addons-723934:
-- stdout --
	[
	    {
	        "Id": "c5addb708592fdfa77aa7b7d08288cb7766d32447d4228cc00a52493d87e6ee7",
	        "Created": "2024-09-16T19:01:28.975285605Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 574095,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T19:01:29.121962629Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:735d22f77ce2bf9e02c77058920b4d1610fffc1af6c5e42bd1f17e7556552aac",
	        "ResolvConfPath": "/var/lib/docker/containers/c5addb708592fdfa77aa7b7d08288cb7766d32447d4228cc00a52493d87e6ee7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c5addb708592fdfa77aa7b7d08288cb7766d32447d4228cc00a52493d87e6ee7/hostname",
	        "HostsPath": "/var/lib/docker/containers/c5addb708592fdfa77aa7b7d08288cb7766d32447d4228cc00a52493d87e6ee7/hosts",
	        "LogPath": "/var/lib/docker/containers/c5addb708592fdfa77aa7b7d08288cb7766d32447d4228cc00a52493d87e6ee7/c5addb708592fdfa77aa7b7d08288cb7766d32447d4228cc00a52493d87e6ee7-json.log",
	        "Name": "/addons-723934",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-723934:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-723934",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e5bb71094cbea80bd39f51946985e6dc6ffba12013d9045cc28683765da2967b-init/diff:/var/lib/docker/overlay2/edb671577cc764a521e14f43310b7030b6a96ca4c77bc20e20db626401d5e11f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e5bb71094cbea80bd39f51946985e6dc6ffba12013d9045cc28683765da2967b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e5bb71094cbea80bd39f51946985e6dc6ffba12013d9045cc28683765da2967b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e5bb71094cbea80bd39f51946985e6dc6ffba12013d9045cc28683765da2967b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-723934",
	                "Source": "/var/lib/docker/volumes/addons-723934/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-723934",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-723934",
	                "name.minikube.sigs.k8s.io": "addons-723934",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "74533e596de784238eedb4178994f4fd962f9cf6de51d13aad0f9d1550f4175e",
	            "SandboxKey": "/var/run/docker/netns/74533e596de7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33499"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33500"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33503"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33501"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33502"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-723934": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "8ff731a469a48875fa8d49e7f69a2690918f7b715af603eb4b78d05debe3a352",
	                    "EndpointID": "900617768dc70df1a436c4f4abbf484a6092d036b71fa9c6472ee51fa2be8657",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-723934",
	                        "c5addb708592"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-723934 -n addons-723934
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-723934 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-723934 logs -n 25: (1.241797449s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-555988   | jenkins | v1.34.0 | 16 Sep 24 19:00 UTC |                     |
	|         | -p download-only-555988              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 16 Sep 24 19:00 UTC | 16 Sep 24 19:00 UTC |
	| delete  | -p download-only-555988              | download-only-555988   | jenkins | v1.34.0 | 16 Sep 24 19:00 UTC | 16 Sep 24 19:00 UTC |
	| start   | -o=json --download-only              | download-only-078157   | jenkins | v1.34.0 | 16 Sep 24 19:00 UTC |                     |
	|         | -p download-only-078157              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 16 Sep 24 19:01 UTC | 16 Sep 24 19:01 UTC |
	| delete  | -p download-only-078157              | download-only-078157   | jenkins | v1.34.0 | 16 Sep 24 19:01 UTC | 16 Sep 24 19:01 UTC |
	| delete  | -p download-only-555988              | download-only-555988   | jenkins | v1.34.0 | 16 Sep 24 19:01 UTC | 16 Sep 24 19:01 UTC |
	| delete  | -p download-only-078157              | download-only-078157   | jenkins | v1.34.0 | 16 Sep 24 19:01 UTC | 16 Sep 24 19:01 UTC |
	| start   | --download-only -p                   | download-docker-931427 | jenkins | v1.34.0 | 16 Sep 24 19:01 UTC |                     |
	|         | download-docker-931427               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p download-docker-931427            | download-docker-931427 | jenkins | v1.34.0 | 16 Sep 24 19:01 UTC | 16 Sep 24 19:01 UTC |
	| start   | --download-only -p                   | binary-mirror-053813   | jenkins | v1.34.0 | 16 Sep 24 19:01 UTC |                     |
	|         | binary-mirror-053813                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:36551               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-053813              | binary-mirror-053813   | jenkins | v1.34.0 | 16 Sep 24 19:01 UTC | 16 Sep 24 19:01 UTC |
	| addons  | disable dashboard -p                 | addons-723934          | jenkins | v1.34.0 | 16 Sep 24 19:01 UTC |                     |
	|         | addons-723934                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-723934          | jenkins | v1.34.0 | 16 Sep 24 19:01 UTC |                     |
	|         | addons-723934                        |                        |         |         |                     |                     |
	| start   | -p addons-723934 --wait=true         | addons-723934          | jenkins | v1.34.0 | 16 Sep 24 19:01 UTC | 16 Sep 24 19:04 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| addons  | addons-723934 addons disable         | addons-723934          | jenkins | v1.34.0 | 16 Sep 24 19:05 UTC | 16 Sep 24 19:05 UTC |
	|         | volcano --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | addons-723934 addons                 | addons-723934          | jenkins | v1.34.0 | 16 Sep 24 19:14 UTC | 16 Sep 24 19:14 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-723934 addons                 | addons-723934          | jenkins | v1.34.0 | 16 Sep 24 19:14 UTC | 16 Sep 24 19:14 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-723934 addons                 | addons-723934          | jenkins | v1.34.0 | 16 Sep 24 19:14 UTC | 16 Sep 24 19:14 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-723934          | jenkins | v1.34.0 | 16 Sep 24 19:14 UTC | 16 Sep 24 19:14 UTC |
	|         | addons-723934                        |                        |         |         |                     |                     |
	| ip      | addons-723934 ip                     | addons-723934          | jenkins | v1.34.0 | 16 Sep 24 19:14 UTC | 16 Sep 24 19:14 UTC |
	| addons  | addons-723934 addons disable         | addons-723934          | jenkins | v1.34.0 | 16 Sep 24 19:14 UTC | 16 Sep 24 19:14 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 19:01:05
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 19:01:05.008271  573596 out.go:345] Setting OutFile to fd 1 ...
	I0916 19:01:05.008487  573596 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:01:05.008512  573596 out.go:358] Setting ErrFile to fd 2...
	I0916 19:01:05.008531  573596 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:01:05.008872  573596 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-567461/.minikube/bin
	I0916 19:01:05.009489  573596 out.go:352] Setting JSON to false
	I0916 19:01:05.010565  573596 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9803,"bootTime":1726503462,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0916 19:01:05.010712  573596 start.go:139] virtualization:  
	I0916 19:01:05.014129  573596 out.go:177] * [addons-723934] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0916 19:01:05.017547  573596 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 19:01:05.017621  573596 notify.go:220] Checking for updates...
	I0916 19:01:05.020105  573596 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 19:01:05.022047  573596 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19649-567461/kubeconfig
	I0916 19:01:05.023772  573596 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-567461/.minikube
	I0916 19:01:05.025504  573596 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0916 19:01:05.027722  573596 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 19:01:05.032634  573596 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 19:01:05.064656  573596 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 19:01:05.064804  573596 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 19:01:05.126722  573596 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-16 19:01:05.116755413 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0916 19:01:05.126901  573596 docker.go:318] overlay module found
	I0916 19:01:05.129020  573596 out.go:177] * Using the docker driver based on user configuration
	I0916 19:01:05.130333  573596 start.go:297] selected driver: docker
	I0916 19:01:05.130353  573596 start.go:901] validating driver "docker" against <nil>
	I0916 19:01:05.130372  573596 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 19:01:05.131121  573596 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 19:01:05.188256  573596 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-16 19:01:05.178615819 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0916 19:01:05.188504  573596 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 19:01:05.188745  573596 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 19:01:05.189917  573596 out.go:177] * Using Docker driver with root privileges
	I0916 19:01:05.191101  573596 cni.go:84] Creating CNI manager for ""
	I0916 19:01:05.191180  573596 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 19:01:05.191193  573596 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 19:01:05.191277  573596 start.go:340] cluster config:
	{Name:addons-723934 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-723934 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 19:01:05.193178  573596 out.go:177] * Starting "addons-723934" primary control-plane node in "addons-723934" cluster
	I0916 19:01:05.194445  573596 cache.go:121] Beginning downloading kic base image for docker with docker
	I0916 19:01:05.196321  573596 out.go:177] * Pulling base image v0.0.45-1726481311-19649 ...
	I0916 19:01:05.197867  573596 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 19:01:05.197927  573596 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19649-567461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 19:01:05.197948  573596 cache.go:56] Caching tarball of preloaded images
	I0916 19:01:05.197956  573596 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc in local docker daemon
	I0916 19:01:05.198030  573596 preload.go:172] Found /home/jenkins/minikube-integration/19649-567461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 19:01:05.198041  573596 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 19:01:05.198398  573596 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/config.json ...
	I0916 19:01:05.198428  573596 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/config.json: {Name:mkd63b69c021f79b0c3440b43882d4036ba81ff9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:01:05.214752  573596 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc to local cache
	I0916 19:01:05.214883  573596 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc in local cache directory
	I0916 19:01:05.214910  573596 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc in local cache directory, skipping pull
	I0916 19:01:05.214919  573596 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc exists in cache, skipping pull
	I0916 19:01:05.214928  573596 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc as a tarball
	I0916 19:01:05.214937  573596 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc from local cache
	I0916 19:01:22.542191  573596 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc from cached tarball
	I0916 19:01:22.542230  573596 cache.go:194] Successfully downloaded all kic artifacts
	I0916 19:01:22.542328  573596 start.go:360] acquireMachinesLock for addons-723934: {Name:mka721fcd2a2a98ad9f24ab3b0b380a25eed9ee9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 19:01:22.542465  573596 start.go:364] duration metric: took 114.393µs to acquireMachinesLock for "addons-723934"
	I0916 19:01:22.542507  573596 start.go:93] Provisioning new machine with config: &{Name:addons-723934 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-723934 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 19:01:22.542584  573596 start.go:125] createHost starting for "" (driver="docker")
	I0916 19:01:22.544383  573596 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0916 19:01:22.544649  573596 start.go:159] libmachine.API.Create for "addons-723934" (driver="docker")
	I0916 19:01:22.544684  573596 client.go:168] LocalClient.Create starting
	I0916 19:01:22.544808  573596 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19649-567461/.minikube/certs/ca.pem
	I0916 19:01:22.804524  573596 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19649-567461/.minikube/certs/cert.pem
	I0916 19:01:22.969856  573596 cli_runner.go:164] Run: docker network inspect addons-723934 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 19:01:22.985417  573596 cli_runner.go:211] docker network inspect addons-723934 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 19:01:22.985499  573596 network_create.go:284] running [docker network inspect addons-723934] to gather additional debugging logs...
	I0916 19:01:22.985522  573596 cli_runner.go:164] Run: docker network inspect addons-723934
	W0916 19:01:23.000996  573596 cli_runner.go:211] docker network inspect addons-723934 returned with exit code 1
	I0916 19:01:23.001038  573596 network_create.go:287] error running [docker network inspect addons-723934]: docker network inspect addons-723934: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-723934 not found
	I0916 19:01:23.001054  573596 network_create.go:289] output of [docker network inspect addons-723934]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-723934 not found
	
	** /stderr **
	I0916 19:01:23.001161  573596 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 19:01:23.019530  573596 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a2e480}
	I0916 19:01:23.019603  573596 network_create.go:124] attempt to create docker network addons-723934 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0916 19:01:23.019659  573596 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-723934 addons-723934
	I0916 19:01:23.103566  573596 network_create.go:108] docker network addons-723934 192.168.49.0/24 created
	I0916 19:01:23.103601  573596 kic.go:121] calculated static IP "192.168.49.2" for the "addons-723934" container
	I0916 19:01:23.103672  573596 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 19:01:23.118111  573596 cli_runner.go:164] Run: docker volume create addons-723934 --label name.minikube.sigs.k8s.io=addons-723934 --label created_by.minikube.sigs.k8s.io=true
	I0916 19:01:23.135403  573596 oci.go:103] Successfully created a docker volume addons-723934
	I0916 19:01:23.135494  573596 cli_runner.go:164] Run: docker run --rm --name addons-723934-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-723934 --entrypoint /usr/bin/test -v addons-723934:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc -d /var/lib
	I0916 19:01:25.190580  573596 cli_runner.go:217] Completed: docker run --rm --name addons-723934-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-723934 --entrypoint /usr/bin/test -v addons-723934:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc -d /var/lib: (2.055027599s)
	I0916 19:01:25.190612  573596 oci.go:107] Successfully prepared a docker volume addons-723934
	I0916 19:01:25.190647  573596 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 19:01:25.190666  573596 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 19:01:25.190744  573596 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19649-567461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-723934:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 19:01:28.898592  573596 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19649-567461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-723934:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc -I lz4 -xf /preloaded.tar -C /extractDir: (3.707799238s)
	I0916 19:01:28.898626  573596 kic.go:203] duration metric: took 3.707955894s to extract preloaded images to volume ...
	W0916 19:01:28.898786  573596 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 19:01:28.898926  573596 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 19:01:28.961121  573596 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-723934 --name addons-723934 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-723934 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-723934 --network addons-723934 --ip 192.168.49.2 --volume addons-723934:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc
	I0916 19:01:29.289704  573596 cli_runner.go:164] Run: docker container inspect addons-723934 --format={{.State.Running}}
	I0916 19:01:29.315146  573596 cli_runner.go:164] Run: docker container inspect addons-723934 --format={{.State.Status}}
	I0916 19:01:29.340895  573596 cli_runner.go:164] Run: docker exec addons-723934 stat /var/lib/dpkg/alternatives/iptables
	I0916 19:01:29.411305  573596 oci.go:144] the created container "addons-723934" has a running status.
	I0916 19:01:29.411331  573596 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19649-567461/.minikube/machines/addons-723934/id_rsa...
	I0916 19:01:30.533636  573596 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19649-567461/.minikube/machines/addons-723934/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 19:01:30.552955  573596 cli_runner.go:164] Run: docker container inspect addons-723934 --format={{.State.Status}}
	I0916 19:01:30.570422  573596 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 19:01:30.570443  573596 kic_runner.go:114] Args: [docker exec --privileged addons-723934 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 19:01:30.619254  573596 cli_runner.go:164] Run: docker container inspect addons-723934 --format={{.State.Status}}
	I0916 19:01:30.635523  573596 machine.go:93] provisionDockerMachine start ...
	I0916 19:01:30.635632  573596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-723934
	I0916 19:01:30.653738  573596 main.go:141] libmachine: Using SSH client type: native
	I0916 19:01:30.654026  573596 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33499 <nil> <nil>}
	I0916 19:01:30.654044  573596 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 19:01:30.794499  573596 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-723934
	
	I0916 19:01:30.794527  573596 ubuntu.go:169] provisioning hostname "addons-723934"
	I0916 19:01:30.794596  573596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-723934
	I0916 19:01:30.811972  573596 main.go:141] libmachine: Using SSH client type: native
	I0916 19:01:30.812224  573596 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33499 <nil> <nil>}
	I0916 19:01:30.812242  573596 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-723934 && echo "addons-723934" | sudo tee /etc/hostname
	I0916 19:01:30.964020  573596 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-723934
	
	I0916 19:01:30.964100  573596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-723934
	I0916 19:01:30.983047  573596 main.go:141] libmachine: Using SSH client type: native
	I0916 19:01:30.983346  573596 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33499 <nil> <nil>}
	I0916 19:01:30.983370  573596 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-723934' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-723934/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-723934' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 19:01:31.127603  573596 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 19:01:31.127635  573596 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19649-567461/.minikube CaCertPath:/home/jenkins/minikube-integration/19649-567461/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19649-567461/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19649-567461/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19649-567461/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19649-567461/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19649-567461/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19649-567461/.minikube}
	I0916 19:01:31.127677  573596 ubuntu.go:177] setting up certificates
	I0916 19:01:31.127689  573596 provision.go:84] configureAuth start
	I0916 19:01:31.127766  573596 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-723934
	I0916 19:01:31.146016  573596 provision.go:143] copyHostCerts
	I0916 19:01:31.146127  573596 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-567461/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19649-567461/.minikube/ca.pem (1078 bytes)
	I0916 19:01:31.146258  573596 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-567461/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19649-567461/.minikube/cert.pem (1123 bytes)
	I0916 19:01:31.146318  573596 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-567461/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19649-567461/.minikube/key.pem (1679 bytes)
	I0916 19:01:31.146368  573596 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19649-567461/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19649-567461/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19649-567461/.minikube/certs/ca-key.pem org=jenkins.addons-723934 san=[127.0.0.1 192.168.49.2 addons-723934 localhost minikube]
	I0916 19:01:31.542515  573596 provision.go:177] copyRemoteCerts
	I0916 19:01:31.542595  573596 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 19:01:31.542644  573596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-723934
	I0916 19:01:31.558916  573596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33499 SSHKeyPath:/home/jenkins/minikube-integration/19649-567461/.minikube/machines/addons-723934/id_rsa Username:docker}
	I0916 19:01:31.656498  573596 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-567461/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 19:01:31.682977  573596 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-567461/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 19:01:31.708499  573596 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-567461/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 19:01:31.733722  573596 provision.go:87] duration metric: took 606.011202ms to configureAuth
	I0916 19:01:31.733752  573596 ubuntu.go:193] setting minikube options for container-runtime
	I0916 19:01:31.733949  573596 config.go:182] Loaded profile config "addons-723934": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 19:01:31.734015  573596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-723934
	I0916 19:01:31.751323  573596 main.go:141] libmachine: Using SSH client type: native
	I0916 19:01:31.751611  573596 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33499 <nil> <nil>}
	I0916 19:01:31.751631  573596 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0916 19:01:31.895840  573596 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0916 19:01:31.895862  573596 ubuntu.go:71] root file system type: overlay
	I0916 19:01:31.895984  573596 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0916 19:01:31.896064  573596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-723934
	I0916 19:01:31.913464  573596 main.go:141] libmachine: Using SSH client type: native
	I0916 19:01:31.913710  573596 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33499 <nil> <nil>}
	I0916 19:01:31.913801  573596 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0916 19:01:32.070021  573596 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0916 19:01:32.070124  573596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-723934
	I0916 19:01:32.088096  573596 main.go:141] libmachine: Using SSH client type: native
	I0916 19:01:32.088401  573596 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33499 <nil> <nil>}
	I0916 19:01:32.088439  573596 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0916 19:01:32.870775  573596 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-06 12:06:36.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-16 19:01:32.066426640 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
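The provisioner above uses a diff-or-replace idiom: it writes the candidate unit to `docker.service.new` and, only when it differs from the installed unit, moves it into place and restarts the service. A minimal sketch of the same idiom on throwaway files (paths and unit contents are stand-ins, not minikube's):

```shell
# Diff-or-replace: only swap the file in (and "restart") when content changed.
set -eu
tmp=$(mktemp -d)
printf 'Restart=always\n' > "$tmp/docker.service"         # installed unit (stand-in)
printf 'Restart=on-failure\n' > "$tmp/docker.service.new" # candidate unit

# diff exits 0 when the files match, nonzero when they differ.
if diff -u "$tmp/docker.service" "$tmp/docker.service.new" > /dev/null; then
  echo "unchanged: skipping restart"
else
  mv "$tmp/docker.service.new" "$tmp/docker.service"
  echo "updated: would run daemon-reload + restart here"
fi
```

The `|| { mv ...; systemctl ...; }` form in the log is the same logic compressed onto one line.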
	
	I0916 19:01:32.870890  573596 machine.go:96] duration metric: took 2.235335313s to provisionDockerMachine
	I0916 19:01:32.870917  573596 client.go:171] duration metric: took 10.326222756s to LocalClient.Create
	I0916 19:01:32.870974  573596 start.go:167] duration metric: took 10.326326171s to libmachine.API.Create "addons-723934"
	I0916 19:01:32.871001  573596 start.go:293] postStartSetup for "addons-723934" (driver="docker")
	I0916 19:01:32.871042  573596 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 19:01:32.871149  573596 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 19:01:32.871221  573596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-723934
	I0916 19:01:32.889513  573596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33499 SSHKeyPath:/home/jenkins/minikube-integration/19649-567461/.minikube/machines/addons-723934/id_rsa Username:docker}
	I0916 19:01:32.988878  573596 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 19:01:32.992456  573596 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 19:01:32.992500  573596 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 19:01:32.992513  573596 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 19:01:32.992520  573596 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 19:01:32.992530  573596 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-567461/.minikube/addons for local assets ...
	I0916 19:01:32.992624  573596 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-567461/.minikube/files for local assets ...
	I0916 19:01:32.992661  573596 start.go:296] duration metric: took 121.639812ms for postStartSetup
	I0916 19:01:32.993089  573596 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-723934
	I0916 19:01:33.018974  573596 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/config.json ...
	I0916 19:01:33.019308  573596 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 19:01:33.019371  573596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-723934
	I0916 19:01:33.039374  573596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33499 SSHKeyPath:/home/jenkins/minikube-integration/19649-567461/.minikube/machines/addons-723934/id_rsa Username:docker}
	I0916 19:01:33.139971  573596 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 19:01:33.144834  573596 start.go:128] duration metric: took 10.602233318s to createHost
	I0916 19:01:33.144861  573596 start.go:83] releasing machines lock for "addons-723934", held for 10.602379801s
	I0916 19:01:33.144959  573596 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-723934
	I0916 19:01:33.161951  573596 ssh_runner.go:195] Run: cat /version.json
	I0916 19:01:33.162011  573596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-723934
	I0916 19:01:33.162326  573596 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 19:01:33.162456  573596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-723934
	I0916 19:01:33.184623  573596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33499 SSHKeyPath:/home/jenkins/minikube-integration/19649-567461/.minikube/machines/addons-723934/id_rsa Username:docker}
	I0916 19:01:33.201903  573596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33499 SSHKeyPath:/home/jenkins/minikube-integration/19649-567461/.minikube/machines/addons-723934/id_rsa Username:docker}
	I0916 19:01:33.286674  573596 ssh_runner.go:195] Run: systemctl --version
	I0916 19:01:33.439203  573596 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 19:01:33.443507  573596 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 19:01:33.471462  573596 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 19:01:33.471597  573596 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 19:01:33.500992  573596 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
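The loopback patch above does two things to each `*loopback.conf*` file: insert a `"name"` field if one is missing, and pin `cniVersion` to 1.0.0. A sketch of the same edits on a scratch file (the file content is illustrative; assumes GNU sed, as the log itself uses `sed -i`):

```shell
# Patch a loopback CNI config: ensure a "name" field exists and pin cniVersion.
set -eu
conf=$(mktemp)
cat > "$conf" <<'EOF'
{
    "cniVersion": "0.3.1",
    "type": "loopback"
}
EOF

# Insert "name": "loopback" before the "type" line only when it is absent.
grep -q '"name"' "$conf" || \
  sed -i 's|"type": "loopback"|"name": "loopback",\n    &|' "$conf"
sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|' "$conf"
cat "$conf"
```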
	I0916 19:01:33.501033  573596 start.go:495] detecting cgroup driver to use...
	I0916 19:01:33.501066  573596 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 19:01:33.501203  573596 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 19:01:33.517889  573596 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 19:01:33.527775  573596 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 19:01:33.537932  573596 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 19:01:33.538029  573596 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 19:01:33.548315  573596 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 19:01:33.558748  573596 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 19:01:33.569082  573596 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 19:01:33.579304  573596 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 19:01:33.588989  573596 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 19:01:33.599288  573596 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 19:01:33.609851  573596 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 19:01:33.620323  573596 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 19:01:33.629392  573596 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 19:01:33.638121  573596 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 19:01:33.730692  573596 ssh_runner.go:195] Run: sudo systemctl restart containerd
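The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place: pause image bumped to 3.10, `SystemdCgroup = false` for the cgroupfs driver, runc v2 runtime, and so on. The two most consequential rewrites, applied to a sample config (file content is illustrative; assumes GNU sed):

```shell
# Rewrite a sample config.toml the way the log does: cgroupfs driver
# (SystemdCgroup = false) and the pause:3.10 sandbox image.
set -eu
toml=$(mktemp)
cat > "$toml" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF

sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' "$toml"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$toml"
cat "$toml"
```

The `\1` backreference preserves the original indentation, which is why the log's patterns capture leading spaces.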
	I0916 19:01:33.839552  573596 start.go:495] detecting cgroup driver to use...
	I0916 19:01:33.839647  573596 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 19:01:33.839725  573596 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0916 19:01:33.856038  573596 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0916 19:01:33.856170  573596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 19:01:33.870204  573596 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 19:01:33.891109  573596 ssh_runner.go:195] Run: which cri-dockerd
	I0916 19:01:33.896403  573596 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 19:01:33.908965  573596 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0916 19:01:33.928583  573596 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0916 19:01:34.044866  573596 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0916 19:01:34.149110  573596 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0916 19:01:34.149296  573596 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0916 19:01:34.171676  573596 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 19:01:34.261868  573596 ssh_runner.go:195] Run: sudo systemctl restart docker
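The 130-byte `/etc/docker/daemon.json` written above configures Docker for the cgroupfs driver; the log does not show its payload, so the JSON below is an assumed minimal shape, not the file minikube actually ships:

```shell
# Write a daemon.json selecting the cgroupfs driver, then sanity-check it.
# NOTE: the exact JSON minikube writes is not shown in the log; this shape
# is an assumption for illustration.
set -eu
dir=$(mktemp -d)
cat > "$dir/daemon.json" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
grep -q 'native.cgroupdriver=cgroupfs' "$dir/daemon.json" && \
  echo "cgroupfs driver configured"
```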
	I0916 19:01:34.548329  573596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 19:01:34.561002  573596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 19:01:34.574573  573596 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0916 19:01:34.673119  573596 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0916 19:01:34.774855  573596 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 19:01:34.867824  573596 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0916 19:01:34.885271  573596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 19:01:34.897596  573596 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 19:01:34.990992  573596 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0916 19:01:35.072754  573596 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 19:01:35.072921  573596 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
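"Will wait 60s for socket path" above is a stat-based poll with a deadline. A sketch of that pattern on a scratch path (the path and timeouts are stand-ins):

```shell
# Poll for a path with a timeout, as in the wait for /var/run/cri-dockerd.sock.
set -eu
sock=$(mktemp -u)             # a path that does not exist yet
( sleep 1; touch "$sock" ) &  # something creates it shortly

deadline=$(( $(date +%s) + 10 ))
until stat "$sock" > /dev/null 2>&1; do
  [ "$(date +%s)" -lt "$deadline" ] || { echo "timed out"; exit 1; }
  sleep 0.2
done
echo "socket path appeared"
wait
```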
	I0916 19:01:35.079151  573596 start.go:563] Will wait 60s for crictl version
	I0916 19:01:35.079272  573596 ssh_runner.go:195] Run: which crictl
	I0916 19:01:35.083557  573596 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 19:01:35.127185  573596 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0916 19:01:35.127317  573596 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 19:01:35.152120  573596 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 19:01:35.180830  573596 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0916 19:01:35.180992  573596 cli_runner.go:164] Run: docker network inspect addons-723934 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 19:01:35.202860  573596 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 19:01:35.206756  573596 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
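The `/etc/hosts` rewrite above is an idempotent append: strip any existing line for the name, then add the desired mapping, so repeated runs leave exactly one entry. The same idiom on a scratch copy rather than the real `/etc/hosts`:

```shell
# Idempotently pin a host entry: remove any old line for the name, append one.
set -eu
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n' > "$hosts"

{ grep -v 'host\.minikube\.internal$' "$hosts"; \
  printf '192.168.49.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

The log anchors on a leading tab (`$'\thost.minikube.internal$'`) to avoid matching substrings of other hostnames; the simpler pattern here is enough for the sketch.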
	I0916 19:01:35.219090  573596 kubeadm.go:883] updating cluster {Name:addons-723934 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-723934 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuF
irmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 19:01:35.219213  573596 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 19:01:35.219274  573596 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 19:01:35.238032  573596 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0916 19:01:35.238054  573596 docker.go:615] Images already preloaded, skipping extraction
	I0916 19:01:35.238116  573596 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 19:01:35.256560  573596 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0916 19:01:35.256590  573596 cache_images.go:84] Images are preloaded, skipping loading
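The preload check above lists `docker images --format {{.Repository}}:{{.Tag}}` and concludes "Images are preloaded, skipping loading" when every required image is present. A sketch of that set comparison using `comm` (image names copied from the log output; the comparison itself is illustrative):

```shell
# Report images in the wanted list that are missing from the "got" list.
set -eu
want=$(mktemp); got=$(mktemp)
cat > "$want" <<'EOF'
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/pause:3.10
gcr.io/k8s-minikube/storage-provisioner:v5
EOF
cat > "$got" <<'EOF'
gcr.io/k8s-minikube/storage-provisioner:v5
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/pause:3.10
EOF

sort "$want" -o "$want"; sort "$got" -o "$got"   # comm needs sorted input
missing=$(comm -23 "$want" "$got")
if [ -z "$missing" ]; then
  echo "images already preloaded, skipping extraction"
else
  printf 'missing:\n%s\n' "$missing"
fi
```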
	I0916 19:01:35.256601  573596 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0916 19:01:35.256703  573596 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-723934 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-723934 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 19:01:35.256787  573596 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0916 19:01:35.307504  573596 cni.go:84] Creating CNI manager for ""
	I0916 19:01:35.307562  573596 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 19:01:35.307580  573596 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 19:01:35.307605  573596 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-723934 NodeName:addons-723934 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 19:01:35.307758  573596 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-723934"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 19:01:35.307833  573596 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 19:01:35.317050  573596 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 19:01:35.317126  573596 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 19:01:35.326162  573596 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0916 19:01:35.344794  573596 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 19:01:35.363337  573596 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
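The kubeadm.yaml written above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. A quick structural check on a skeleton of it (only the `kind` lines are reproduced):

```shell
# Count the documents and list the kinds in a multi-document kubeadm config.
set -eu
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
kind: InitConfiguration
---
kind: ClusterConfiguration
---
kind: KubeletConfiguration
---
kind: KubeProxyConfiguration
EOF

grep '^kind:' "$cfg"
docs=$(( $(grep -c '^---$' "$cfg") + 1 ))   # N separators => N+1 documents
echo "documents: $docs"
```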
	I0916 19:01:35.381795  573596 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0916 19:01:35.385798  573596 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 19:01:35.396980  573596 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 19:01:35.490015  573596 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 19:01:35.505543  573596 certs.go:68] Setting up /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934 for IP: 192.168.49.2
	I0916 19:01:35.505565  573596 certs.go:194] generating shared ca certs ...
	I0916 19:01:35.505584  573596 certs.go:226] acquiring lock for ca certs: {Name:mkdd10a292ff3ea2dc9c1b4e4435868b0212ff31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:01:35.505715  573596 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19649-567461/.minikube/ca.key
	I0916 19:01:36.493348  573596 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-567461/.minikube/ca.crt ...
	I0916 19:01:36.493387  573596 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-567461/.minikube/ca.crt: {Name:mk67383b6262a25be4791a9df3acda0f3afd50bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:01:36.494228  573596 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-567461/.minikube/ca.key ...
	I0916 19:01:36.494266  573596 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-567461/.minikube/ca.key: {Name:mk4e21e3e7c12a5487d458f1d47d9e032a0d6e04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:01:36.494467  573596 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19649-567461/.minikube/proxy-client-ca.key
	I0916 19:01:36.667652  573596 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-567461/.minikube/proxy-client-ca.crt ...
	I0916 19:01:36.667691  573596 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-567461/.minikube/proxy-client-ca.crt: {Name:mk3d0577ba6ffa6ef5461df28e4ce38bfc46d769 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:01:36.667972  573596 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-567461/.minikube/proxy-client-ca.key ...
	I0916 19:01:36.667989  573596 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-567461/.minikube/proxy-client-ca.key: {Name:mkc471ccac4157667a9a0bc08b39d177c34e4ac7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:01:36.668076  573596 certs.go:256] generating profile certs ...
	I0916 19:01:36.668176  573596 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.key
	I0916 19:01:36.668193  573596 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.crt with IP's: []
	I0916 19:01:37.006139  573596 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.crt ...
	I0916 19:01:37.006178  573596 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.crt: {Name:mk177df4fb5d8dfcdec147eca816e7e28575db5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:01:37.006395  573596 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.key ...
	I0916 19:01:37.006405  573596 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.key: {Name:mkcf766997949dae945a7899eda26b4f5c5fd1ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:01:37.007336  573596 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/apiserver.key.55adda8b
	I0916 19:01:37.007384  573596 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/apiserver.crt.55adda8b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0916 19:01:37.248300  573596 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/apiserver.crt.55adda8b ...
	I0916 19:01:37.248334  573596 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/apiserver.crt.55adda8b: {Name:mk1ab995459e87d4d17d791533bfd74d4e5282fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:01:37.248527  573596 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/apiserver.key.55adda8b ...
	I0916 19:01:37.248542  573596 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/apiserver.key.55adda8b: {Name:mk36c947a522f2deba10ab9d8214541e0f48e003 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:01:37.248631  573596 certs.go:381] copying /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/apiserver.crt.55adda8b -> /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/apiserver.crt
	I0916 19:01:37.248710  573596 certs.go:385] copying /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/apiserver.key.55adda8b -> /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/apiserver.key
	I0916 19:01:37.248761  573596 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/proxy-client.key
	I0916 19:01:37.248782  573596 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/proxy-client.crt with IP's: []
	I0916 19:01:38.142932  573596 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/proxy-client.crt ...
	I0916 19:01:38.142966  573596 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/proxy-client.crt: {Name:mk39d3cad31267ffc6305fab94704ae633fe9a43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:01:38.143153  573596 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/proxy-client.key ...
	I0916 19:01:38.143169  573596 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/proxy-client.key: {Name:mkdbaddfca98c0e1c925b71fc4938ede7daa744a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:01:38.143368  573596 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-567461/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 19:01:38.143410  573596 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-567461/.minikube/certs/ca.pem (1078 bytes)
	I0916 19:01:38.143439  573596 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-567461/.minikube/certs/cert.pem (1123 bytes)
	I0916 19:01:38.143468  573596 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-567461/.minikube/certs/key.pem (1679 bytes)
	I0916 19:01:38.144063  573596 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-567461/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 19:01:38.170222  573596 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-567461/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 19:01:38.195345  573596 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-567461/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 19:01:38.219800  573596 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-567461/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 19:01:38.246463  573596 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0916 19:01:38.272501  573596 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 19:01:38.298344  573596 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 19:01:38.323073  573596 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 19:01:38.347928  573596 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-567461/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 19:01:38.373703  573596 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 19:01:38.392346  573596 ssh_runner.go:195] Run: openssl version
	I0916 19:01:38.398018  573596 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 19:01:38.409078  573596 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 19:01:38.412697  573596 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I0916 19:01:38.412792  573596 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 19:01:38.420058  573596 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 19:01:38.429554  573596 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 19:01:38.433084  573596 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 19:01:38.433179  573596 kubeadm.go:392] StartCluster: {Name:addons-723934 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-723934 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 19:01:38.433350  573596 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 19:01:38.453130  573596 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 19:01:38.461928  573596 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 19:01:38.470859  573596 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 19:01:38.470928  573596 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 19:01:38.480106  573596 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 19:01:38.480128  573596 kubeadm.go:157] found existing configuration files:
	
	I0916 19:01:38.480182  573596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 19:01:38.489090  573596 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 19:01:38.489245  573596 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 19:01:38.498132  573596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 19:01:38.507689  573596 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 19:01:38.507780  573596 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 19:01:38.516893  573596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 19:01:38.527019  573596 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 19:01:38.527124  573596 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 19:01:38.536527  573596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 19:01:38.545982  573596 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 19:01:38.546053  573596 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 19:01:38.555028  573596 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 19:01:38.613348  573596 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 19:01:38.613656  573596 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 19:01:38.637846  573596 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 19:01:38.637921  573596 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-aws
	I0916 19:01:38.638010  573596 kubeadm.go:310] OS: Linux
	I0916 19:01:38.638096  573596 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 19:01:38.638193  573596 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 19:01:38.638281  573596 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 19:01:38.638341  573596 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 19:01:38.638398  573596 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 19:01:38.638454  573596 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 19:01:38.638504  573596 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 19:01:38.638555  573596 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 19:01:38.638624  573596 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 19:01:38.711558  573596 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 19:01:38.711698  573596 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 19:01:38.711800  573596 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 19:01:38.731234  573596 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 19:01:38.736541  573596 out.go:235]   - Generating certificates and keys ...
	I0916 19:01:38.736728  573596 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 19:01:38.736830  573596 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 19:01:38.973170  573596 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 19:01:40.373891  573596 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 19:01:40.480454  573596 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 19:01:40.808657  573596 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 19:01:41.136215  573596 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 19:01:41.136348  573596 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-723934 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 19:01:41.758376  573596 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 19:01:41.758651  573596 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-723934 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 19:01:42.150650  573596 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 19:01:43.237210  573596 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 19:01:44.873519  573596 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 19:01:44.873770  573596 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 19:01:45.442304  573596 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 19:01:45.698313  573596 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 19:01:46.274095  573596 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 19:01:46.543163  573596 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 19:01:47.346210  573596 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 19:01:47.346865  573596 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 19:01:47.352109  573596 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 19:01:47.354591  573596 out.go:235]   - Booting up control plane ...
	I0916 19:01:47.354698  573596 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 19:01:47.354778  573596 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 19:01:47.355434  573596 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 19:01:47.366533  573596 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 19:01:47.373414  573596 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 19:01:47.373716  573596 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 19:01:47.497332  573596 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 19:01:47.497456  573596 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 19:01:48.999118  573596 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501872312s
	I0916 19:01:48.999207  573596 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 19:01:56.503166  573596 kubeadm.go:310] [api-check] The API server is healthy after 7.50214042s
	I0916 19:01:56.523736  573596 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 19:01:56.545998  573596 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 19:01:56.580778  573596 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 19:01:56.580978  573596 kubeadm.go:310] [mark-control-plane] Marking the node addons-723934 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 19:01:56.596787  573596 kubeadm.go:310] [bootstrap-token] Using token: 4fxgi0.wcjsmdo2f97ksvpn
	I0916 19:01:56.599658  573596 out.go:235]   - Configuring RBAC rules ...
	I0916 19:01:56.599791  573596 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 19:01:56.606729  573596 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 19:01:56.619640  573596 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 19:01:56.624041  573596 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 19:01:56.628625  573596 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 19:01:56.632987  573596 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 19:01:56.909237  573596 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 19:01:57.336583  573596 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 19:01:57.907662  573596 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 19:01:57.908745  573596 kubeadm.go:310] 
	I0916 19:01:57.908826  573596 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 19:01:57.908841  573596 kubeadm.go:310] 
	I0916 19:01:57.908929  573596 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 19:01:57.908938  573596 kubeadm.go:310] 
	I0916 19:01:57.908963  573596 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 19:01:57.909028  573596 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 19:01:57.909094  573596 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 19:01:57.909105  573596 kubeadm.go:310] 
	I0916 19:01:57.909158  573596 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 19:01:57.909169  573596 kubeadm.go:310] 
	I0916 19:01:57.909216  573596 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 19:01:57.909225  573596 kubeadm.go:310] 
	I0916 19:01:57.909277  573596 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 19:01:57.909356  573596 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 19:01:57.909428  573596 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 19:01:57.909436  573596 kubeadm.go:310] 
	I0916 19:01:57.909522  573596 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 19:01:57.909602  573596 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 19:01:57.909610  573596 kubeadm.go:310] 
	I0916 19:01:57.909694  573596 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4fxgi0.wcjsmdo2f97ksvpn \
	I0916 19:01:57.909798  573596 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:24d5c90e2e0578510c5e66434d865c7f1d6419be439a9ed3ef618b44fcf46ea1 \
	I0916 19:01:57.909823  573596 kubeadm.go:310] 	--control-plane 
	I0916 19:01:57.909833  573596 kubeadm.go:310] 
	I0916 19:01:57.909919  573596 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 19:01:57.909928  573596 kubeadm.go:310] 
	I0916 19:01:57.910013  573596 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4fxgi0.wcjsmdo2f97ksvpn \
	I0916 19:01:57.910120  573596 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:24d5c90e2e0578510c5e66434d865c7f1d6419be439a9ed3ef618b44fcf46ea1 
	I0916 19:01:57.914172  573596 kubeadm.go:310] W0916 19:01:38.609724    1820 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 19:01:57.914478  573596 kubeadm.go:310] W0916 19:01:38.610740    1820 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 19:01:57.914696  573596 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-aws\n", err: exit status 1
	I0916 19:01:57.914806  573596 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 19:01:57.914849  573596 cni.go:84] Creating CNI manager for ""
	I0916 19:01:57.914865  573596 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 19:01:57.919605  573596 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0916 19:01:57.922328  573596 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0916 19:01:57.933269  573596 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0916 19:01:57.955695  573596 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 19:01:57.955789  573596 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 19:01:57.955876  573596 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-723934 minikube.k8s.io/updated_at=2024_09_16T19_01_57_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8 minikube.k8s.io/name=addons-723934 minikube.k8s.io/primary=true
	I0916 19:01:58.059237  573596 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 19:01:58.059303  573596 ops.go:34] apiserver oom_adj: -16
	I0916 19:01:58.559318  573596 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 19:01:59.060283  573596 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 19:01:59.559270  573596 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 19:02:00.061079  573596 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 19:02:00.559551  573596 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 19:02:01.059279  573596 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 19:02:01.559430  573596 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 19:02:01.653959  573596 kubeadm.go:1113] duration metric: took 3.698231442s to wait for elevateKubeSystemPrivileges
	I0916 19:02:01.653995  573596 kubeadm.go:394] duration metric: took 23.220819466s to StartCluster
	I0916 19:02:01.654014  573596 settings.go:142] acquiring lock: {Name:mk653b628471cb6d9d8c45abca24778e3ae081fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:02:01.654156  573596 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19649-567461/kubeconfig
	I0916 19:02:01.654559  573596 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-567461/kubeconfig: {Name:mk6009131d24c51ff5da1b66280eb3942ce849f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:02:01.654778  573596 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 19:02:01.654920  573596 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 19:02:01.655206  573596 config.go:182] Loaded profile config "addons-723934": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 19:02:01.655251  573596 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0916 19:02:01.655331  573596 addons.go:69] Setting yakd=true in profile "addons-723934"
	I0916 19:02:01.655346  573596 addons.go:234] Setting addon yakd=true in "addons-723934"
	I0916 19:02:01.655371  573596 host.go:66] Checking if "addons-723934" exists ...
	I0916 19:02:01.655870  573596 cli_runner.go:164] Run: docker container inspect addons-723934 --format={{.State.Status}}
	I0916 19:02:01.656486  573596 addons.go:69] Setting inspektor-gadget=true in profile "addons-723934"
	I0916 19:02:01.656511  573596 addons.go:234] Setting addon inspektor-gadget=true in "addons-723934"
	I0916 19:02:01.656540  573596 host.go:66] Checking if "addons-723934" exists ...
	I0916 19:02:01.656975  573596 cli_runner.go:164] Run: docker container inspect addons-723934 --format={{.State.Status}}
	I0916 19:02:01.658641  573596 addons.go:69] Setting metrics-server=true in profile "addons-723934"
	I0916 19:02:01.658749  573596 addons.go:234] Setting addon metrics-server=true in "addons-723934"
	I0916 19:02:01.658808  573596 host.go:66] Checking if "addons-723934" exists ...
	I0916 19:02:01.659417  573596 cli_runner.go:164] Run: docker container inspect addons-723934 --format={{.State.Status}}
	I0916 19:02:01.659907  573596 addons.go:69] Setting cloud-spanner=true in profile "addons-723934"
	I0916 19:02:01.659941  573596 addons.go:234] Setting addon cloud-spanner=true in "addons-723934"
	I0916 19:02:01.659978  573596 host.go:66] Checking if "addons-723934" exists ...
	I0916 19:02:01.660448  573596 cli_runner.go:164] Run: docker container inspect addons-723934 --format={{.State.Status}}
	I0916 19:02:01.660940  573596 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-723934"
	I0916 19:02:01.660997  573596 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-723934"
	I0916 19:02:01.661026  573596 host.go:66] Checking if "addons-723934" exists ...
	I0916 19:02:01.661457  573596 cli_runner.go:164] Run: docker container inspect addons-723934 --format={{.State.Status}}
	I0916 19:02:01.665095  573596 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-723934"
	I0916 19:02:01.665140  573596 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-723934"
	I0916 19:02:01.665180  573596 host.go:66] Checking if "addons-723934" exists ...
	I0916 19:02:01.665653  573596 cli_runner.go:164] Run: docker container inspect addons-723934 --format={{.State.Status}}
	I0916 19:02:01.671514  573596 addons.go:69] Setting registry=true in profile "addons-723934"
	I0916 19:02:01.671559  573596 addons.go:234] Setting addon registry=true in "addons-723934"
	I0916 19:02:01.671608  573596 host.go:66] Checking if "addons-723934" exists ...
	I0916 19:02:01.672097  573596 cli_runner.go:164] Run: docker container inspect addons-723934 --format={{.State.Status}}
	I0916 19:02:01.676050  573596 addons.go:69] Setting default-storageclass=true in profile "addons-723934"
	I0916 19:02:01.676146  573596 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-723934"
	I0916 19:02:01.676571  573596 cli_runner.go:164] Run: docker container inspect addons-723934 --format={{.State.Status}}
	I0916 19:02:01.686519  573596 addons.go:69] Setting storage-provisioner=true in profile "addons-723934"
	I0916 19:02:01.686572  573596 addons.go:234] Setting addon storage-provisioner=true in "addons-723934"
	I0916 19:02:01.686614  573596 host.go:66] Checking if "addons-723934" exists ...
	I0916 19:02:01.687220  573596 cli_runner.go:164] Run: docker container inspect addons-723934 --format={{.State.Status}}
	I0916 19:02:01.695729  573596 addons.go:69] Setting gcp-auth=true in profile "addons-723934"
	I0916 19:02:01.695766  573596 mustload.go:65] Loading cluster: addons-723934
	I0916 19:02:01.696098  573596 config.go:182] Loaded profile config "addons-723934": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 19:02:01.696369  573596 cli_runner.go:164] Run: docker container inspect addons-723934 --format={{.State.Status}}
	I0916 19:02:01.697105  573596 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-723934"
	I0916 19:02:01.697130  573596 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-723934"
	I0916 19:02:01.697475  573596 cli_runner.go:164] Run: docker container inspect addons-723934 --format={{.State.Status}}
	I0916 19:02:01.712195  573596 addons.go:69] Setting ingress=true in profile "addons-723934"
	I0916 19:02:01.712292  573596 addons.go:234] Setting addon ingress=true in "addons-723934"
	I0916 19:02:01.712475  573596 host.go:66] Checking if "addons-723934" exists ...
	I0916 19:02:01.713090  573596 cli_runner.go:164] Run: docker container inspect addons-723934 --format={{.State.Status}}
	I0916 19:02:01.715371  573596 addons.go:69] Setting volcano=true in profile "addons-723934"
	I0916 19:02:01.715409  573596 addons.go:234] Setting addon volcano=true in "addons-723934"
	I0916 19:02:01.715447  573596 host.go:66] Checking if "addons-723934" exists ...
	I0916 19:02:01.715952  573596 cli_runner.go:164] Run: docker container inspect addons-723934 --format={{.State.Status}}
	I0916 19:02:01.726735  573596 addons.go:69] Setting ingress-dns=true in profile "addons-723934"
	I0916 19:02:01.726779  573596 addons.go:234] Setting addon ingress-dns=true in "addons-723934"
	I0916 19:02:01.726893  573596 host.go:66] Checking if "addons-723934" exists ...
	I0916 19:02:01.727397  573596 cli_runner.go:164] Run: docker container inspect addons-723934 --format={{.State.Status}}
	I0916 19:02:01.732096  573596 addons.go:69] Setting volumesnapshots=true in profile "addons-723934"
	I0916 19:02:01.732618  573596 addons.go:234] Setting addon volumesnapshots=true in "addons-723934"
	I0916 19:02:01.732673  573596 host.go:66] Checking if "addons-723934" exists ...
	I0916 19:02:01.733206  573596 cli_runner.go:164] Run: docker container inspect addons-723934 --format={{.State.Status}}
	I0916 19:02:01.741756  573596 out.go:177] * Verifying Kubernetes components...
	I0916 19:02:01.783485  573596 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0916 19:02:01.794426  573596 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0916 19:02:01.794469  573596 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0916 19:02:01.794544  573596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-723934
	I0916 19:02:01.811740  573596 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0916 19:02:01.814276  573596 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0916 19:02:01.815188  573596 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 19:02:01.819141  573596 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0916 19:02:01.821016  573596 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0916 19:02:01.824430  573596 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0916 19:02:01.824739  573596 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0916 19:02:01.824775  573596 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0916 19:02:01.824906  573596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-723934
	I0916 19:02:01.893374  573596 out.go:177]   - Using image docker.io/registry:2.8.3
	I0916 19:02:01.898941  573596 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0916 19:02:01.899557  573596 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0916 19:02:01.910401  573596 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0916 19:02:01.910956  573596 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0916 19:02:01.912701  573596 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 19:02:01.912720  573596 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0916 19:02:01.911801  573596 addons.go:234] Setting addon default-storageclass=true in "addons-723934"
	I0916 19:02:01.912785  573596 host.go:66] Checking if "addons-723934" exists ...
	I0916 19:02:01.913273  573596 cli_runner.go:164] Run: docker container inspect addons-723934 --format={{.State.Status}}
	I0916 19:02:01.913492  573596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-723934
	I0916 19:02:01.930706  573596 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 19:02:01.930737  573596 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 19:02:01.930804  573596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-723934
	I0916 19:02:01.940913  573596 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0916 19:02:01.946310  573596 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0916 19:02:01.946382  573596 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0916 19:02:01.946483  573596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-723934
	I0916 19:02:01.951977  573596 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0916 19:02:01.955047  573596 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0916 19:02:01.957832  573596 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0916 19:02:01.958739  573596 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0916 19:02:01.958769  573596 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0916 19:02:01.958865  573596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-723934
	I0916 19:02:01.960720  573596 host.go:66] Checking if "addons-723934" exists ...
	I0916 19:02:01.963415  573596 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-723934"
	I0916 19:02:01.963457  573596 host.go:66] Checking if "addons-723934" exists ...
	I0916 19:02:01.963888  573596 cli_runner.go:164] Run: docker container inspect addons-723934 --format={{.State.Status}}
	I0916 19:02:01.977326  573596 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0916 19:02:01.977519  573596 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0916 19:02:01.983954  573596 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 19:02:01.983983  573596 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0916 19:02:01.984054  573596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-723934
	I0916 19:02:01.984262  573596 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0916 19:02:01.987314  573596 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0916 19:02:01.992405  573596 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 19:02:01.992440  573596 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0916 19:02:01.992512  573596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-723934
	I0916 19:02:02.011355  573596 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0916 19:02:02.016266  573596 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 19:02:02.020676  573596 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 19:02:02.020813  573596 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 19:02:02.020842  573596 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 19:02:02.020912  573596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-723934
	I0916 19:02:02.023220  573596 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 19:02:02.023267  573596 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0916 19:02:02.023349  573596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-723934
	I0916 19:02:02.052544  573596 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 19:02:02.052770  573596 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0916 19:02:02.054988  573596 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 19:02:02.055069  573596 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0916 19:02:02.055378  573596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-723934
	I0916 19:02:02.055691  573596 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 19:02:02.055727  573596 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0916 19:02:02.055788  573596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-723934
	I0916 19:02:02.084625  573596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33499 SSHKeyPath:/home/jenkins/minikube-integration/19649-567461/.minikube/machines/addons-723934/id_rsa Username:docker}
	I0916 19:02:02.113453  573596 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 19:02:02.113477  573596 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 19:02:02.113542  573596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-723934
	I0916 19:02:02.119269  573596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33499 SSHKeyPath:/home/jenkins/minikube-integration/19649-567461/.minikube/machines/addons-723934/id_rsa Username:docker}
	I0916 19:02:02.128350  573596 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
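	The `sed` pipeline above rewrites CoreDNS's ConfigMap in place: it inserts a `hosts` block immediately before the `forward . /etc/resolv.conf` line (so that `host.minikube.internal` resolves to the host gateway, 192.168.49.1, with `fallthrough` passing all other names on), and inserts `log` before `errors`. The resulting Corefile fragment looks roughly like this (surrounding plugins are illustrative, only the two inserted pieces are taken from the command):

	```
	.:53 {
	    log
	    errors
	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	}
	```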
	I0916 19:02:02.141023  573596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33499 SSHKeyPath:/home/jenkins/minikube-integration/19649-567461/.minikube/machines/addons-723934/id_rsa Username:docker}
	I0916 19:02:02.146913  573596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33499 SSHKeyPath:/home/jenkins/minikube-integration/19649-567461/.minikube/machines/addons-723934/id_rsa Username:docker}
	I0916 19:02:02.194097  573596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33499 SSHKeyPath:/home/jenkins/minikube-integration/19649-567461/.minikube/machines/addons-723934/id_rsa Username:docker}
	I0916 19:02:02.195348  573596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33499 SSHKeyPath:/home/jenkins/minikube-integration/19649-567461/.minikube/machines/addons-723934/id_rsa Username:docker}
	I0916 19:02:02.200310  573596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33499 SSHKeyPath:/home/jenkins/minikube-integration/19649-567461/.minikube/machines/addons-723934/id_rsa Username:docker}
	I0916 19:02:02.240060  573596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33499 SSHKeyPath:/home/jenkins/minikube-integration/19649-567461/.minikube/machines/addons-723934/id_rsa Username:docker}
	I0916 19:02:02.242350  573596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33499 SSHKeyPath:/home/jenkins/minikube-integration/19649-567461/.minikube/machines/addons-723934/id_rsa Username:docker}
	I0916 19:02:02.244921  573596 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0916 19:02:02.247580  573596 out.go:177]   - Using image docker.io/busybox:stable
	I0916 19:02:02.251008  573596 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 19:02:02.251033  573596 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0916 19:02:02.251105  573596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-723934
	I0916 19:02:02.265668  573596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33499 SSHKeyPath:/home/jenkins/minikube-integration/19649-567461/.minikube/machines/addons-723934/id_rsa Username:docker}
	I0916 19:02:02.273977  573596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33499 SSHKeyPath:/home/jenkins/minikube-integration/19649-567461/.minikube/machines/addons-723934/id_rsa Username:docker}
	I0916 19:02:02.276849  573596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33499 SSHKeyPath:/home/jenkins/minikube-integration/19649-567461/.minikube/machines/addons-723934/id_rsa Username:docker}
	W0916 19:02:02.283194  573596 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0916 19:02:02.283226  573596 retry.go:31] will retry after 131.381821ms: ssh: handshake failed: EOF
	I0916 19:02:02.296342  573596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33499 SSHKeyPath:/home/jenkins/minikube-integration/19649-567461/.minikube/machines/addons-723934/id_rsa Username:docker}
	I0916 19:02:02.303797  573596 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 19:02:02.323516  573596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33499 SSHKeyPath:/home/jenkins/minikube-integration/19649-567461/.minikube/machines/addons-723934/id_rsa Username:docker}
	W0916 19:02:02.324431  573596 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0916 19:02:02.324468  573596 retry.go:31] will retry after 175.440776ms: ssh: handshake failed: EOF
	W0916 19:02:02.416982  573596 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0916 19:02:02.417012  573596 retry.go:31] will retry after 558.995428ms: ssh: handshake failed: EOF
	W0916 19:02:02.501115  573596 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0916 19:02:02.501193  573596 retry.go:31] will retry after 206.141776ms: ssh: handshake failed: EOF
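	The `sshutil`/`retry.go` lines above show transient `ssh: handshake failed: EOF` errors being retried after short, growing-ish delays (131ms, 175ms, 206ms, 559ms) while the container's SSH daemon finishes starting. A minimal sketch of that retry-with-backoff pattern, assuming a simple doubling policy (minikube's actual `retry.go` backoff/jitter parameters are not shown in this log):

	```go
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retryAfter keeps calling fn until it succeeds or maxTotal elapses,
	// sleeping a doubling interval between attempts. Illustrative only;
	// real implementations typically add jitter to the interval.
	func retryAfter(fn func() error, initial, maxTotal time.Duration) error {
		deadline := time.Now().Add(maxTotal)
		interval := initial
		var err error
		for time.Now().Before(deadline) {
			if err = fn(); err == nil {
				return nil
			}
			time.Sleep(interval)
			interval *= 2
		}
		return fmt.Errorf("timed out: %w", err)
	}

	func main() {
		attempts := 0
		err := retryAfter(func() error {
			attempts++
			if attempts < 3 {
				// Mimic the transient dial failure seen in the log.
				return errors.New("ssh: handshake failed: EOF")
			}
			return nil
		}, 10*time.Millisecond, time.Second)
		fmt.Println(err == nil, attempts)
	}
	```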
	I0916 19:02:02.601610  573596 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0916 19:02:02.601636  573596 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0916 19:02:02.639847  573596 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0916 19:02:02.639923  573596 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0916 19:02:02.795383  573596 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0916 19:02:02.795463  573596 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0916 19:02:02.907727  573596 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0916 19:02:02.949822  573596 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 19:02:02.985808  573596 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 19:02:03.000947  573596 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 19:02:03.000975  573596 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0916 19:02:03.100328  573596 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 19:02:03.100358  573596 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0916 19:02:03.139055  573596 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 19:02:03.139081  573596 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0916 19:02:03.144893  573596 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 19:02:03.147372  573596 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 19:02:03.147397  573596 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0916 19:02:03.247613  573596 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 19:02:03.391321  573596 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0916 19:02:03.391349  573596 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0916 19:02:03.411198  573596 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0916 19:02:03.411230  573596 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0916 19:02:03.440572  573596 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0916 19:02:03.440600  573596 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0916 19:02:03.498236  573596 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 19:02:03.531781  573596 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 19:02:03.638958  573596 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0916 19:02:03.638983  573596 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0916 19:02:03.665294  573596 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 19:02:03.665324  573596 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0916 19:02:03.680403  573596 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 19:02:03.680431  573596 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0916 19:02:03.752947  573596 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 19:02:03.752973  573596 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 19:02:03.765103  573596 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0916 19:02:03.765130  573596 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0916 19:02:03.867051  573596 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 19:02:03.895778  573596 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0916 19:02:03.895829  573596 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0916 19:02:04.133579  573596 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0916 19:02:04.176133  573596 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 19:02:04.176163  573596 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0916 19:02:04.267277  573596 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 19:02:04.267305  573596 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 19:02:04.297116  573596 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 19:02:04.297143  573596 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0916 19:02:04.311499  573596 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0916 19:02:04.311534  573596 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0916 19:02:04.315299  573596 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0916 19:02:04.375208  573596 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.246763309s)
	I0916 19:02:04.375279  573596 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0916 19:02:04.376379  573596 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.072557343s)
	I0916 19:02:04.377226  573596 node_ready.go:35] waiting up to 6m0s for node "addons-723934" to be "Ready" ...
	I0916 19:02:04.381642  573596 node_ready.go:49] node "addons-723934" has status "Ready":"True"
	I0916 19:02:04.381715  573596 node_ready.go:38] duration metric: took 4.31818ms for node "addons-723934" to be "Ready" ...
	I0916 19:02:04.381741  573596 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 19:02:04.394049  573596 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2xqtg" in "kube-system" namespace to be "Ready" ...
	I0916 19:02:04.534364  573596 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 19:02:04.534448  573596 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0916 19:02:04.536573  573596 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 19:02:04.567495  573596 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 19:02:04.567593  573596 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0916 19:02:04.634970  573596 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0916 19:02:04.635072  573596 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0916 19:02:04.683858  573596 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 19:02:04.683962  573596 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0916 19:02:04.879898  573596 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-723934" context rescaled to 1 replicas
	I0916 19:02:04.909546  573596 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 19:02:05.074634  573596 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 19:02:05.074715  573596 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0916 19:02:05.083631  573596 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0916 19:02:05.083724  573596 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0916 19:02:05.201965  573596 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 19:02:05.202044  573596 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0916 19:02:05.381177  573596 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 19:02:05.517021  573596 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 19:02:05.517251  573596 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0916 19:02:05.856669  573596 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 19:02:05.856742  573596 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0916 19:02:06.386014  573596 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 19:02:06.386040  573596 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0916 19:02:06.407199  573596 pod_ready.go:103] pod "coredns-7c65d6cfc9-2xqtg" in "kube-system" namespace has status "Ready":"False"
	I0916 19:02:06.796853  573596 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.889042398s)
	I0916 19:02:06.988304  573596 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 19:02:06.988332  573596 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0916 19:02:07.515014  573596 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 19:02:08.409500  573596 pod_ready.go:103] pod "coredns-7c65d6cfc9-2xqtg" in "kube-system" namespace has status "Ready":"False"
	I0916 19:02:08.973565  573596 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0916 19:02:08.973655  573596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-723934
	I0916 19:02:09.001896  573596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33499 SSHKeyPath:/home/jenkins/minikube-integration/19649-567461/.minikube/machines/addons-723934/id_rsa Username:docker}
	I0916 19:02:10.452489  573596 pod_ready.go:103] pod "coredns-7c65d6cfc9-2xqtg" in "kube-system" namespace has status "Ready":"False"
	I0916 19:02:10.614652  573596 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0916 19:02:11.073945  573596 addons.go:234] Setting addon gcp-auth=true in "addons-723934"
	I0916 19:02:11.074007  573596 host.go:66] Checking if "addons-723934" exists ...
	I0916 19:02:11.074522  573596 cli_runner.go:164] Run: docker container inspect addons-723934 --format={{.State.Status}}
	I0916 19:02:11.096570  573596 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0916 19:02:11.096624  573596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-723934
	I0916 19:02:11.123601  573596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33499 SSHKeyPath:/home/jenkins/minikube-integration/19649-567461/.minikube/machines/addons-723934/id_rsa Username:docker}
	I0916 19:02:12.902036  573596 pod_ready.go:103] pod "coredns-7c65d6cfc9-2xqtg" in "kube-system" namespace has status "Ready":"False"
	I0916 19:02:14.954849  573596 pod_ready.go:103] pod "coredns-7c65d6cfc9-2xqtg" in "kube-system" namespace has status "Ready":"False"
	I0916 19:02:15.539946  573596 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (12.590023796s)
	I0916 19:02:15.540027  573596 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (12.554200963s)
	I0916 19:02:15.540054  573596 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (12.395143858s)
	I0916 19:02:15.540108  573596 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (12.292475474s)
	I0916 19:02:15.540195  573596 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (12.041936254s)
	I0916 19:02:15.540210  573596 addons.go:475] Verifying addon ingress=true in "addons-723934"
	I0916 19:02:15.540493  573596 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (12.008683776s)
	I0916 19:02:15.540671  573596 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.673586746s)
	I0916 19:02:15.540725  573596 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (11.407114542s)
	I0916 19:02:15.540831  573596 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (11.225505092s)
	I0916 19:02:15.540845  573596 addons.go:475] Verifying addon registry=true in "addons-723934"
	I0916 19:02:15.541068  573596 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.004405684s)
	I0916 19:02:15.541397  573596 addons.go:475] Verifying addon metrics-server=true in "addons-723934"
	I0916 19:02:15.541145  573596 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.631392428s)
	W0916 19:02:15.541423  573596 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 19:02:15.541446  573596 retry.go:31] will retry after 290.16024ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 19:02:15.541244  573596 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (10.159966734s)
	I0916 19:02:15.544197  573596 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-723934 service yakd-dashboard -n yakd-dashboard
	
	I0916 19:02:15.544194  573596 out.go:177] * Verifying registry addon...
	I0916 19:02:15.544344  573596 out.go:177] * Verifying ingress addon...
	I0916 19:02:15.548950  573596 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0916 19:02:15.549188  573596 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0916 19:02:15.599033  573596 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 19:02:15.599058  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:15.600347  573596 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0916 19:02:15.600375  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0916 19:02:15.621126  573596 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0916 19:02:15.832279  573596 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 19:02:16.058045  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:16.059033  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:16.218664  573596 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.703586419s)
	I0916 19:02:16.218703  573596 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-723934"
	I0916 19:02:16.218970  573596 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.12237729s)
	I0916 19:02:16.221820  573596 out.go:177] * Verifying csi-hostpath-driver addon...
	I0916 19:02:16.221894  573596 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 19:02:16.225411  573596 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0916 19:02:16.228379  573596 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0916 19:02:16.231149  573596 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 19:02:16.231178  573596 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0916 19:02:16.241977  573596 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 19:02:16.242001  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:16.432058  573596 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 19:02:16.432086  573596 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0916 19:02:16.504492  573596 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 19:02:16.504517  573596 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0916 19:02:16.556698  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:16.558681  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:16.586082  573596 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 19:02:16.730571  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:17.054429  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:17.055916  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:17.235995  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:17.400704  573596 pod_ready.go:103] pod "coredns-7c65d6cfc9-2xqtg" in "kube-system" namespace has status "Ready":"False"
	I0916 19:02:17.554257  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:17.555006  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:17.731543  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:18.055433  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:18.056585  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:18.182472  573596 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.350142982s)
	I0916 19:02:18.203283  573596 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.617157077s)
	I0916 19:02:18.206488  573596 addons.go:475] Verifying addon gcp-auth=true in "addons-723934"
	I0916 19:02:18.209257  573596 out.go:177] * Verifying gcp-auth addon...
	I0916 19:02:18.213416  573596 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0916 19:02:18.222932  573596 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 19:02:18.325861  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:18.555575  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:18.557059  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:18.730476  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:19.054479  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:19.055776  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:19.231083  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:19.400834  573596 pod_ready.go:103] pod "coredns-7c65d6cfc9-2xqtg" in "kube-system" namespace has status "Ready":"False"
	I0916 19:02:19.555040  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:19.555970  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:19.730080  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:20.055491  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:20.056785  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:20.230855  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:20.555366  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:20.555779  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:20.730652  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:21.054630  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:21.056130  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:21.230785  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:21.554964  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:21.555931  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:21.732002  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:21.900627  573596 pod_ready.go:103] pod "coredns-7c65d6cfc9-2xqtg" in "kube-system" namespace has status "Ready":"False"
	I0916 19:02:22.055637  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:22.056108  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:22.231312  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:22.555793  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:22.557556  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:22.731412  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:23.054115  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:23.054323  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:23.230766  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:23.554959  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:23.556213  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:23.731160  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:23.900886  573596 pod_ready.go:103] pod "coredns-7c65d6cfc9-2xqtg" in "kube-system" namespace has status "Ready":"False"
	I0916 19:02:24.053498  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:24.053995  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:24.230013  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:24.562158  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:24.564505  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:24.730980  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:25.055234  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:25.055994  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:25.334297  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:25.555984  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:25.557554  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:25.731048  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:25.901370  573596 pod_ready.go:103] pod "coredns-7c65d6cfc9-2xqtg" in "kube-system" namespace has status "Ready":"False"
	I0916 19:02:26.054195  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:26.054759  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:26.231234  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:26.553544  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:26.554706  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:26.734291  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:27.056063  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:27.057097  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:27.230188  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:27.555615  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:27.557013  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:27.730288  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:28.055485  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:28.056358  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:28.231407  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:28.400687  573596 pod_ready.go:103] pod "coredns-7c65d6cfc9-2xqtg" in "kube-system" namespace has status "Ready":"False"
	I0916 19:02:28.555627  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:28.556436  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:28.730845  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:29.053495  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:29.054652  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:29.229764  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:29.553816  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:29.555836  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:29.730160  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:30.110159  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:30.111536  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:30.232464  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:30.402709  573596 pod_ready.go:103] pod "coredns-7c65d6cfc9-2xqtg" in "kube-system" namespace has status "Ready":"False"
	I0916 19:02:30.564812  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:30.565777  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:30.736879  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:31.054719  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:31.055975  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:31.231777  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:31.575353  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:31.576002  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:31.736926  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:32.054322  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:32.055220  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:32.230981  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:32.557044  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:32.560137  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:32.731120  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:32.900460  573596 pod_ready.go:103] pod "coredns-7c65d6cfc9-2xqtg" in "kube-system" namespace has status "Ready":"False"
	I0916 19:02:33.054352  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:33.056360  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:33.232304  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:33.555335  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:33.555590  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:33.731477  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:34.060838  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:34.062496  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:34.232103  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:34.560647  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:34.568550  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:34.731429  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:34.904494  573596 pod_ready.go:103] pod "coredns-7c65d6cfc9-2xqtg" in "kube-system" namespace has status "Ready":"False"
	I0916 19:02:35.053911  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:35.054968  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:35.231246  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:35.556177  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:35.557138  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:35.730806  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:36.053813  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:36.054459  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:36.231946  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:36.554342  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:36.554742  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:36.730325  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:37.055225  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:37.055993  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:37.231198  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:37.400805  573596 pod_ready.go:103] pod "coredns-7c65d6cfc9-2xqtg" in "kube-system" namespace has status "Ready":"False"
	I0916 19:02:37.555591  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:37.556831  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:37.730455  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:38.056870  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:38.059085  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:38.231282  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:38.556051  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 19:02:38.556380  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:38.731759  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:38.903213  573596 pod_ready.go:93] pod "coredns-7c65d6cfc9-2xqtg" in "kube-system" namespace has status "Ready":"True"
	I0916 19:02:38.903241  573596 pod_ready.go:82] duration metric: took 34.509099261s for pod "coredns-7c65d6cfc9-2xqtg" in "kube-system" namespace to be "Ready" ...
	I0916 19:02:38.903253  573596 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jmgd7" in "kube-system" namespace to be "Ready" ...
	I0916 19:02:38.906320  573596 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-jmgd7" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-jmgd7" not found
	I0916 19:02:38.906347  573596 pod_ready.go:82] duration metric: took 3.085986ms for pod "coredns-7c65d6cfc9-jmgd7" in "kube-system" namespace to be "Ready" ...
	E0916 19:02:38.906359  573596 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-jmgd7" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-jmgd7" not found
	I0916 19:02:38.906368  573596 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-723934" in "kube-system" namespace to be "Ready" ...
	I0916 19:02:38.916043  573596 pod_ready.go:93] pod "etcd-addons-723934" in "kube-system" namespace has status "Ready":"True"
	I0916 19:02:38.916078  573596 pod_ready.go:82] duration metric: took 9.697587ms for pod "etcd-addons-723934" in "kube-system" namespace to be "Ready" ...
	I0916 19:02:38.916095  573596 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-723934" in "kube-system" namespace to be "Ready" ...
	I0916 19:02:38.925439  573596 pod_ready.go:93] pod "kube-apiserver-addons-723934" in "kube-system" namespace has status "Ready":"True"
	I0916 19:02:38.925466  573596 pod_ready.go:82] duration metric: took 9.361939ms for pod "kube-apiserver-addons-723934" in "kube-system" namespace to be "Ready" ...
	I0916 19:02:38.925478  573596 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-723934" in "kube-system" namespace to be "Ready" ...
	I0916 19:02:38.934961  573596 pod_ready.go:93] pod "kube-controller-manager-addons-723934" in "kube-system" namespace has status "Ready":"True"
	I0916 19:02:38.934996  573596 pod_ready.go:82] duration metric: took 9.502662ms for pod "kube-controller-manager-addons-723934" in "kube-system" namespace to be "Ready" ...
	I0916 19:02:38.935023  573596 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-87fbz" in "kube-system" namespace to be "Ready" ...
	I0916 19:02:39.055564  573596 kapi.go:107] duration metric: took 23.506601917s to wait for kubernetes.io/minikube-addons=registry ...
	I0916 19:02:39.056973  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:39.098535  573596 pod_ready.go:93] pod "kube-proxy-87fbz" in "kube-system" namespace has status "Ready":"True"
	I0916 19:02:39.098644  573596 pod_ready.go:82] duration metric: took 163.609315ms for pod "kube-proxy-87fbz" in "kube-system" namespace to be "Ready" ...
	I0916 19:02:39.098674  573596 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-723934" in "kube-system" namespace to be "Ready" ...
	I0916 19:02:39.234368  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:39.515326  573596 pod_ready.go:93] pod "kube-scheduler-addons-723934" in "kube-system" namespace has status "Ready":"True"
	I0916 19:02:39.515395  573596 pod_ready.go:82] duration metric: took 416.698095ms for pod "kube-scheduler-addons-723934" in "kube-system" namespace to be "Ready" ...
	I0916 19:02:39.515420  573596 pod_ready.go:39] duration metric: took 35.133651821s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 19:02:39.515467  573596 api_server.go:52] waiting for apiserver process to appear ...
	I0916 19:02:39.515567  573596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 19:02:39.538150  573596 api_server.go:72] duration metric: took 37.883316661s to wait for apiserver process to appear ...
	I0916 19:02:39.538225  573596 api_server.go:88] waiting for apiserver healthz status ...
	I0916 19:02:39.538258  573596 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 19:02:39.546454  573596 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 19:02:39.547658  573596 api_server.go:141] control plane version: v1.31.1
	I0916 19:02:39.547693  573596 api_server.go:131] duration metric: took 9.44846ms to wait for apiserver health ...
	I0916 19:02:39.547701  573596 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 19:02:39.554084  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:39.706702  573596 system_pods.go:59] 17 kube-system pods found
	I0916 19:02:39.706784  573596 system_pods.go:61] "coredns-7c65d6cfc9-2xqtg" [4bd56b45-b695-4594-804e-3b7f138733b2] Running
	I0916 19:02:39.706810  573596 system_pods.go:61] "csi-hostpath-attacher-0" [cec58aa6-2dea-4621-949c-1115f8aa049d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 19:02:39.706856  573596 system_pods.go:61] "csi-hostpath-resizer-0" [d8aba52a-21e6-4ba2-9557-4e9ef7098838] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 19:02:39.706878  573596 system_pods.go:61] "csi-hostpathplugin-dmkh6" [66f95b4c-1ce6-4f5e-9c88-cb2e4c5b2507] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 19:02:39.706902  573596 system_pods.go:61] "etcd-addons-723934" [00948c3c-f5c0-4d18-8797-a392361d843d] Running
	I0916 19:02:39.706937  573596 system_pods.go:61] "kube-apiserver-addons-723934" [124e549c-77e6-4380-8eb7-251d85441ce9] Running
	I0916 19:02:39.706965  573596 system_pods.go:61] "kube-controller-manager-addons-723934" [806bc022-185e-428d-89bd-f89d9121b3a6] Running
	I0916 19:02:39.706987  573596 system_pods.go:61] "kube-ingress-dns-minikube" [0d82255b-3500-4394-b7a5-2be1b761b530] Running
	I0916 19:02:39.707008  573596 system_pods.go:61] "kube-proxy-87fbz" [59d385cc-bb87-4302-b1f3-21a6c9a68a2a] Running
	I0916 19:02:39.707043  573596 system_pods.go:61] "kube-scheduler-addons-723934" [ee581bf2-fe0c-48ea-b7f3-10a9f1b76092] Running
	I0916 19:02:39.707068  573596 system_pods.go:61] "metrics-server-84c5f94fbc-z5ddv" [cda63f7f-b04a-4a4b-ab51-d1ef4c36f179] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 19:02:39.707089  573596 system_pods.go:61] "nvidia-device-plugin-daemonset-zjpr7" [7b3f3ae7-3136-4b99-96ae-17db09d0cc53] Running
	I0916 19:02:39.707111  573596 system_pods.go:61] "registry-66c9cd494c-bbjqg" [abab3938-5ca1-4f67-bec8-0f5518fa637b] Running
	I0916 19:02:39.707144  573596 system_pods.go:61] "registry-proxy-5lk22" [095802f7-441a-4970-b3a6-8d88eab7bb43] Running
	I0916 19:02:39.707172  573596 system_pods.go:61] "snapshot-controller-56fcc65765-blgk4" [0de8c637-1e9c-412a-9ac5-a2f1562b534f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 19:02:39.707193  573596 system_pods.go:61] "snapshot-controller-56fcc65765-lzsgw" [6196de19-fd41-4a14-b371-a644958a5fab] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 19:02:39.707214  573596 system_pods.go:61] "storage-provisioner" [30e352ff-67a8-425a-bacf-869009154106] Running
	I0916 19:02:39.707249  573596 system_pods.go:74] duration metric: took 159.540525ms to wait for pod list to return data ...
	I0916 19:02:39.707278  573596 default_sa.go:34] waiting for default service account to be created ...
	I0916 19:02:39.730676  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:39.898695  573596 default_sa.go:45] found service account: "default"
	I0916 19:02:39.898793  573596 default_sa.go:55] duration metric: took 191.490903ms for default service account to be created ...
	I0916 19:02:39.898861  573596 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 19:02:40.062428  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:40.108765  573596 system_pods.go:86] 17 kube-system pods found
	I0916 19:02:40.108851  573596 system_pods.go:89] "coredns-7c65d6cfc9-2xqtg" [4bd56b45-b695-4594-804e-3b7f138733b2] Running
	I0916 19:02:40.108878  573596 system_pods.go:89] "csi-hostpath-attacher-0" [cec58aa6-2dea-4621-949c-1115f8aa049d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 19:02:40.108921  573596 system_pods.go:89] "csi-hostpath-resizer-0" [d8aba52a-21e6-4ba2-9557-4e9ef7098838] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 19:02:40.108952  573596 system_pods.go:89] "csi-hostpathplugin-dmkh6" [66f95b4c-1ce6-4f5e-9c88-cb2e4c5b2507] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 19:02:40.108976  573596 system_pods.go:89] "etcd-addons-723934" [00948c3c-f5c0-4d18-8797-a392361d843d] Running
	I0916 19:02:40.109000  573596 system_pods.go:89] "kube-apiserver-addons-723934" [124e549c-77e6-4380-8eb7-251d85441ce9] Running
	I0916 19:02:40.109036  573596 system_pods.go:89] "kube-controller-manager-addons-723934" [806bc022-185e-428d-89bd-f89d9121b3a6] Running
	I0916 19:02:40.109069  573596 system_pods.go:89] "kube-ingress-dns-minikube" [0d82255b-3500-4394-b7a5-2be1b761b530] Running
	I0916 19:02:40.109091  573596 system_pods.go:89] "kube-proxy-87fbz" [59d385cc-bb87-4302-b1f3-21a6c9a68a2a] Running
	I0916 19:02:40.109113  573596 system_pods.go:89] "kube-scheduler-addons-723934" [ee581bf2-fe0c-48ea-b7f3-10a9f1b76092] Running
	I0916 19:02:40.109149  573596 system_pods.go:89] "metrics-server-84c5f94fbc-z5ddv" [cda63f7f-b04a-4a4b-ab51-d1ef4c36f179] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 19:02:40.109178  573596 system_pods.go:89] "nvidia-device-plugin-daemonset-zjpr7" [7b3f3ae7-3136-4b99-96ae-17db09d0cc53] Running
	I0916 19:02:40.109200  573596 system_pods.go:89] "registry-66c9cd494c-bbjqg" [abab3938-5ca1-4f67-bec8-0f5518fa637b] Running
	I0916 19:02:40.109223  573596 system_pods.go:89] "registry-proxy-5lk22" [095802f7-441a-4970-b3a6-8d88eab7bb43] Running
	I0916 19:02:40.109259  573596 system_pods.go:89] "snapshot-controller-56fcc65765-blgk4" [0de8c637-1e9c-412a-9ac5-a2f1562b534f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 19:02:40.109291  573596 system_pods.go:89] "snapshot-controller-56fcc65765-lzsgw" [6196de19-fd41-4a14-b371-a644958a5fab] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 19:02:40.109312  573596 system_pods.go:89] "storage-provisioner" [30e352ff-67a8-425a-bacf-869009154106] Running
	I0916 19:02:40.109338  573596 system_pods.go:126] duration metric: took 210.453989ms to wait for k8s-apps to be running ...
	I0916 19:02:40.109369  573596 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 19:02:40.109455  573596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 19:02:40.125464  573596 system_svc.go:56] duration metric: took 16.085275ms WaitForService to wait for kubelet
	I0916 19:02:40.125496  573596 kubeadm.go:582] duration metric: took 38.470680085s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 19:02:40.125517  573596 node_conditions.go:102] verifying NodePressure condition ...
	I0916 19:02:40.230994  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:40.298866  573596 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0916 19:02:40.298953  573596 node_conditions.go:123] node cpu capacity is 2
	I0916 19:02:40.298982  573596 node_conditions.go:105] duration metric: took 173.456766ms to run NodePressure ...
	I0916 19:02:40.299009  573596 start.go:241] waiting for startup goroutines ...
	I0916 19:02:40.554396  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:40.730530  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:41.054384  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:41.234654  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:41.555100  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:41.730624  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:42.055052  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:42.245143  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:42.563139  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:42.730793  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:43.057621  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:43.231200  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:43.555286  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:43.730327  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:44.054268  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:44.231262  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:44.555448  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:44.730204  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:45.057377  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:45.237140  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:45.554288  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:45.730374  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:46.054448  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:46.230878  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:46.555452  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:46.730422  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:47.054733  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:47.232406  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:47.555698  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:47.733963  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:48.056765  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:48.233162  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:48.554956  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:48.730583  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:49.056965  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:49.231969  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:49.564735  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:49.731341  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:50.056193  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:50.231863  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:50.554622  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:50.732150  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:51.055432  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:51.231057  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:51.554553  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:51.730561  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:52.054457  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:52.231836  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:52.554647  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:52.730935  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:53.054594  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:53.230956  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:53.554863  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:53.732735  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:54.056413  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:54.231276  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:54.563466  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:54.729893  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:55.054222  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:55.231333  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:55.555065  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:55.731750  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:56.056008  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:56.230781  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:56.554779  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:56.733825  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:57.054035  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:57.230411  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:57.555816  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:57.730470  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:58.053739  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:58.231191  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:58.554045  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:58.729814  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:59.054582  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:59.229868  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:02:59.553996  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:02:59.731507  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:03:00.055507  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:00.237469  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:03:00.554696  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:00.730580  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:03:01.054552  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:01.230346  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:03:01.555073  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:01.731689  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:03:02.053345  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:02.230943  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:03:02.554713  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:02.787261  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:03:03.058255  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:03.232434  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:03:03.555566  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:03.751900  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:03:04.056760  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:04.232001  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:03:04.554250  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:04.731013  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:03:05.054388  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:05.230993  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:03:05.554227  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:05.819305  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:03:06.057348  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:06.231034  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:03:06.555383  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:06.730809  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:03:07.055775  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:07.230944  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:03:07.554237  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:07.729976  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:03:08.054286  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:08.230637  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:03:08.554048  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:08.730327  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:03:09.055997  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:09.230215  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:03:09.554012  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:09.730462  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:03:10.057213  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:10.230089  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:03:10.555491  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:10.730619  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:03:11.053959  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:11.232199  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 19:03:11.554089  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:11.730090  573596 kapi.go:107] duration metric: took 55.504677227s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0916 19:03:12.054401  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:12.553583  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:13.054460  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:13.553446  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:14.054484  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:14.553691  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:15.054631  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:15.553211  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:16.053654  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:16.554421  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:17.054642  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:17.553555  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:18.068251  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:18.554211  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:19.054867  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:19.556610  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:20.054723  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:20.554266  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:21.054526  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:21.553789  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:22.061243  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:22.571804  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:23.059533  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:23.554010  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:24.054426  573596 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 19:03:24.610784  573596 kapi.go:107] duration metric: took 1m9.0615926s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0916 19:03:40.229595  573596 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 19:03:40.229624  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:03:40.718266  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:03:41.216924  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:03:41.717300  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:03:42.218895  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:03:42.717814  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:03:43.217554  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:03:43.718034  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:03:44.217358  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:03:44.717365  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:03:45.229260  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:03:45.717027  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:03:46.217203  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:03:46.718659  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:03:47.218240  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:03:47.716971  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:03:48.216888  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:03:48.717612  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:03:49.217409  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:03:49.717156  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:03:50.218333  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:03:50.716623  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:03:51.218102  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:03:51.716657  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:03:52.218037  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:03:52.738695  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:03:53.218170  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:03:53.716840  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:03:54.218357  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:03:54.720616  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:03:55.218553  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:03:55.716914  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:03:56.218442  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:03:56.717234  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:03:57.218378  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:03:57.716602  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:03:58.219834  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:03:58.717578  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:03:59.217731  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:03:59.718340  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:00.222177  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:00.718423  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:01.216622  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:01.717480  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:02.217884  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:02.717685  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:03.216974  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:03.716881  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:04.219165  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:04.716967  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:05.217685  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:05.717070  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:06.217959  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:06.718117  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:07.217584  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:07.717747  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:08.218149  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:08.717270  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:09.216616  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:09.717539  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:10.218015  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:10.717546  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:11.217279  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:11.717874  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:12.218252  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:12.716802  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:13.218406  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:13.716673  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:14.217788  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:14.717533  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:15.218899  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:15.717512  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:16.216999  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:16.717628  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:17.217987  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:17.717368  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:18.217560  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:18.717658  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:19.217176  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:19.718113  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:20.217366  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:20.717765  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:21.220567  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:21.717311  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:22.218216  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:22.718612  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:23.217574  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:23.717078  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:24.217784  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:24.718180  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:25.217531  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:25.717402  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:26.217018  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:26.717192  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:27.217555  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:27.717774  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:28.218320  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:28.717805  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:29.217685  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:29.717859  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:30.218644  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:30.717614  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:31.217559  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:31.717121  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:32.217608  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:32.717304  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:33.216738  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:33.717093  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:34.217886  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:34.718056  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:35.218526  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:35.716863  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:36.218374  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:36.717176  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:37.218676  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:37.718922  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:38.218262  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:38.717118  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:39.217555  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:39.717335  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:40.217429  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:40.716721  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:41.217249  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:41.717511  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:42.218455  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:42.716706  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:43.216984  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:43.720794  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:44.218259  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:44.716932  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:45.219112  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:45.718242  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:46.217145  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:46.717358  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:47.217644  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:47.717553  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:48.217701  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:48.721834  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:49.218627  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:49.717432  573596 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 19:04:50.218523  573596 kapi.go:107] duration metric: took 2m32.005106487s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0916 19:04:50.221252  573596 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-723934 cluster.
	I0916 19:04:50.223902  573596 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0916 19:04:50.226438  573596 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0916 19:04:50.230050  573596 out.go:177] * Enabled addons: cloud-spanner, volcano, storage-provisioner, nvidia-device-plugin, ingress-dns, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0916 19:04:50.232887  573596 addons.go:510] duration metric: took 2m48.577623765s for enable addons: enabled=[cloud-spanner volcano storage-provisioner nvidia-device-plugin ingress-dns metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0916 19:04:50.232951  573596 start.go:246] waiting for cluster config update ...
	I0916 19:04:50.233000  573596 start.go:255] writing updated cluster config ...
	I0916 19:04:50.233325  573596 ssh_runner.go:195] Run: rm -f paused
	I0916 19:04:50.628358  573596 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0916 19:04:50.631048  573596 out.go:177] * Done! kubectl is now configured to use "addons-723934" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 16 19:14:16 addons-723934 dockerd[1283]: time="2024-09-16T19:14:16.926158025Z" level=info msg="ignoring event" container=562d1fc676b1f7de7ed94f6f3077e719b54e61848859a231affe26104c05ad7b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 19:14:16 addons-723934 dockerd[1283]: time="2024-09-16T19:14:16.990010732Z" level=info msg="ignoring event" container=84bc2230b3bd5817018217f11087cbbc191594774d2144f536583cd004ad5dbb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 19:14:17 addons-723934 dockerd[1283]: time="2024-09-16T19:14:17.106363971Z" level=info msg="ignoring event" container=3d7cabe4af872c6bdda62561c9187e5ed6c019c60d767a8782558edac0cbb752 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 19:14:17 addons-723934 dockerd[1283]: time="2024-09-16T19:14:17.233873262Z" level=info msg="ignoring event" container=b94b3713dd5441d1a542ecf37d1e33c63fa9a6ee8fb04680c5bf70fbc66a5454 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 19:14:17 addons-723934 dockerd[1283]: time="2024-09-16T19:14:17.262312811Z" level=info msg="ignoring event" container=dc1c68cac977058759752673a9c03324e04c43bc6b9f15bae5eaaa8364c3f2b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 19:14:18 addons-723934 cri-dockerd[1542]: time="2024-09-16T19:14:18Z" level=error msg="error getting RW layer size for container ID '14c50327d6c0eecc9e7868caf95897d42135269c5c385b5e995c24e2c785d77f': Error response from daemon: No such container: 14c50327d6c0eecc9e7868caf95897d42135269c5c385b5e995c24e2c785d77f"
	Sep 16 19:14:18 addons-723934 cri-dockerd[1542]: time="2024-09-16T19:14:18Z" level=error msg="Set backoffDuration to : 1m0s for container ID '14c50327d6c0eecc9e7868caf95897d42135269c5c385b5e995c24e2c785d77f'"
	Sep 16 19:14:18 addons-723934 cri-dockerd[1542]: time="2024-09-16T19:14:18Z" level=error msg="error getting RW layer size for container ID '89463348adc1cc59a906fcb8fed31ca89ce191aa22f3534efc2643f382c78afa': Error response from daemon: No such container: 89463348adc1cc59a906fcb8fed31ca89ce191aa22f3534efc2643f382c78afa"
	Sep 16 19:14:18 addons-723934 cri-dockerd[1542]: time="2024-09-16T19:14:18Z" level=error msg="Set backoffDuration to : 1m0s for container ID '89463348adc1cc59a906fcb8fed31ca89ce191aa22f3534efc2643f382c78afa'"
	Sep 16 19:14:23 addons-723934 dockerd[1283]: time="2024-09-16T19:14:23.436420334Z" level=info msg="ignoring event" container=e4495c5b9d487b18170e41c314d1e44f5ab5fd2e0c2e319f7ede01b3424eae26 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 19:14:23 addons-723934 dockerd[1283]: time="2024-09-16T19:14:23.442997586Z" level=info msg="ignoring event" container=d013e37d9dda43c026ca697faa1ed6769a87f96e0619516de69e1abc59c9951e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 19:14:23 addons-723934 dockerd[1283]: time="2024-09-16T19:14:23.651730343Z" level=info msg="ignoring event" container=45f5103eab162168a36b38b6125bd0f4e5b49d2a27109b5f563074f9dbc4dbf8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 19:14:23 addons-723934 dockerd[1283]: time="2024-09-16T19:14:23.676482470Z" level=info msg="ignoring event" container=2e4b30c3955786205dc15fd752666249ccf3cfe28be4ffeca52138815964e321 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 19:14:30 addons-723934 dockerd[1283]: time="2024-09-16T19:14:30.481096849Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 16 19:14:30 addons-723934 dockerd[1283]: time="2024-09-16T19:14:30.484847508Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 16 19:14:31 addons-723934 dockerd[1283]: time="2024-09-16T19:14:31.391176158Z" level=info msg="ignoring event" container=6d4e4ffb1785f817d4f8aea81aa557271cace52b54c8476cdf77e4ab849c4b20 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 19:14:31 addons-723934 dockerd[1283]: time="2024-09-16T19:14:31.515735520Z" level=info msg="ignoring event" container=1da0cf4e56ec0c2b5bf2598892dd57738478727a610f7cfcbed1b511a76fb47b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 19:14:37 addons-723934 dockerd[1283]: time="2024-09-16T19:14:37.089928179Z" level=info msg="ignoring event" container=8736c8589df3975e55543f5f22d052eaed2bf2e0d9cf85019d35acc86b2118a6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 19:14:43 addons-723934 cri-dockerd[1542]: time="2024-09-16T19:14:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4fe5bbbcfb64d2f25b435d091418c31f42c51ce8f66c80e353c0ca5299fc19f6/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 16 19:14:45 addons-723934 cri-dockerd[1542]: time="2024-09-16T19:14:45Z" level=info msg="Stop pulling image docker.io/nginx:alpine: Status: Downloaded newer image for nginx:alpine"
	Sep 16 19:14:46 addons-723934 dockerd[1283]: time="2024-09-16T19:14:46.375907358Z" level=info msg="ignoring event" container=ccd6a3793f419c489d378287367d0ad1b2fef327af8f28dfa57f1ac4f6ec3de5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 19:14:46 addons-723934 dockerd[1283]: time="2024-09-16T19:14:46.991565020Z" level=info msg="ignoring event" container=f16214a0464efd9d830bad1bde27c83649b325ee2ca4d3e96da32ea5ef3871ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 19:14:47 addons-723934 dockerd[1283]: time="2024-09-16T19:14:47.095742996Z" level=info msg="ignoring event" container=daaa2b56c4302eaffcecd0b709733c855487f4f72c01fb83ffafe74267045d25 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 19:14:47 addons-723934 dockerd[1283]: time="2024-09-16T19:14:47.207838630Z" level=info msg="ignoring event" container=94413d8cf35d016f8a58572b4bc0ffc02560d7b98e6d5352b9dc66b165052f53 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 19:14:47 addons-723934 dockerd[1283]: time="2024-09-16T19:14:47.460418145Z" level=info msg="ignoring event" container=adb43938bb9356c9b6fdb27d2945df310dcd8a32bd54e2852d9753d0d6165ec0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	8bb5e9648ba12       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                                3 seconds ago       Running             nginx                      0                   4fe5bbbcfb64d       nginx
	8c9398110f758       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 10 minutes ago      Running             gcp-auth                   0                   3b32e0966066c       gcp-auth-89d5ffd79-ldchs
	65d68fdad8add       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             11 minutes ago      Running             controller                 0                   219e2954269ff       ingress-nginx-controller-bc57996ff-xg54q
	9e217895aed36       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              patch                      0                   18973c7ac2d38       ingress-nginx-admission-patch-v6gxg
	bdbf3c7bf4081       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                     0                   26a82d5155330       ingress-nginx-admission-create-hqx8s
	f17642abff71f       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago      Running             local-path-provisioner     0                   6a800c9a15319       local-path-provisioner-86d989889c-fhjfc
	65547691ea141       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                        12 minutes ago      Running             yakd                       0                   840704b2de3e4       yakd-dashboard-67d98fc6b-qhxwh
	daaa2b56c4302       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367              12 minutes ago      Exited              registry-proxy             0                   adb43938bb935       registry-proxy-5lk22
	82b0d94aca31c       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             12 minutes ago      Running             minikube-ingress-dns       0                   99854e5b3e183       kube-ingress-dns-minikube
	ca9aca6cb7568       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               12 minutes ago      Running             cloud-spanner-emulator     0                   b132bfb80c827       cloud-spanner-emulator-769b77f747-w5j7f
	193c59d2fcfc2       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                     12 minutes ago      Running             nvidia-device-plugin-ctr   0                   93008b8bb7f6a       nvidia-device-plugin-daemonset-zjpr7
	61ccb3c8c1867       ba04bb24b9575                                                                                                                12 minutes ago      Running             storage-provisioner        0                   5b7b6a9440aef       storage-provisioner
	c56a0b5f7f8b1       2f6c962e7b831                                                                                                                12 minutes ago      Running             coredns                    0                   038b8f42ae3be       coredns-7c65d6cfc9-2xqtg
	0261e0b6126d0       24a140c548c07                                                                                                                12 minutes ago      Running             kube-proxy                 0                   2ad2ea9a0d528       kube-proxy-87fbz
	9d8623e031ddf       7f8aa378bb47d                                                                                                                12 minutes ago      Running             kube-scheduler             0                   e88f4d9895e90       kube-scheduler-addons-723934
	1eee1af8dd166       d3f53a98c0a9d                                                                                                                12 minutes ago      Running             kube-apiserver             0                   278482d3d0b05       kube-apiserver-addons-723934
	0a7e33d6ccf41       27e3830e14027                                                                                                                12 minutes ago      Running             etcd                       0                   24db1ffe12ce9       etcd-addons-723934
	d18e53eb68cc7       279f381cb3736                                                                                                                12 minutes ago      Running             kube-controller-manager    0                   2e87561e23955       kube-controller-manager-addons-723934
	
	
	==> controller_ingress [65d68fdad8ad] <==
	I0916 19:03:23.304175       8 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"5d0f4db0-3044-468a-bea4-9e9ffe766d3b", APIVersion:"v1", ResourceVersion:"694", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0916 19:03:23.304234       8 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"4552a640-3a64-4d06-9282-7399148e33eb", APIVersion:"v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0916 19:03:24.474075       8 nginx.go:317] "Starting NGINX process"
	I0916 19:03:24.474210       8 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0916 19:03:24.476088       8 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0916 19:03:24.476435       8 controller.go:193] "Configuration changes detected, backend reload required"
	I0916 19:03:24.500719       8 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0916 19:03:24.501213       8 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-xg54q"
	I0916 19:03:24.511611       8 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-xg54q" node="addons-723934"
	I0916 19:03:24.544890       8 controller.go:213] "Backend successfully reloaded"
	I0916 19:03:24.545048       8 controller.go:224] "Initial sync, sleeping for 1 second"
	I0916 19:03:24.545379       8 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-xg54q", UID:"cf7cacab-f993-4cfa-88d2-cea13eb4885e", APIVersion:"v1", ResourceVersion:"721", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0916 19:14:42.698425       8 controller.go:1110] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0916 19:14:42.716977       8 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.018s renderingIngressLength:1 renderingIngressTime:0s admissionTime:0.018s testedConfigurationSize:18.1kB}
	I0916 19:14:42.717244       8 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I0916 19:14:42.722218       8 store.go:440] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	W0916 19:14:42.722748       8 controller.go:1110] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0916 19:14:42.722992       8 controller.go:193] "Configuration changes detected, backend reload required"
	I0916 19:14:42.725649       8 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"5ec529d2-ef4d-4548-a729-28d0509549fd", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2800", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	I0916 19:14:42.775595       8 controller.go:213] "Backend successfully reloaded"
	I0916 19:14:42.776564       8 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-xg54q", UID:"cf7cacab-f993-4cfa-88d2-cea13eb4885e", APIVersion:"v1", ResourceVersion:"721", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0916 19:14:46.057322       8 controller.go:1216] Service "default/nginx" does not have any active Endpoint.
	I0916 19:14:46.057439       8 controller.go:193] "Configuration changes detected, backend reload required"
	I0916 19:14:46.126980       8 controller.go:213] "Backend successfully reloaded"
	I0916 19:14:46.127291       8 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-xg54q", UID:"cf7cacab-f993-4cfa-88d2-cea13eb4885e", APIVersion:"v1", ResourceVersion:"721", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	
	
	==> coredns [c56a0b5f7f8b] <==
	[INFO] 10.244.0.7:35900 - 51601 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000099904s
	[INFO] 10.244.0.7:54702 - 64167 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002596397s
	[INFO] 10.244.0.7:54702 - 4772 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002140366s
	[INFO] 10.244.0.7:54824 - 56872 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000150593s
	[INFO] 10.244.0.7:54824 - 32789 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000099083s
	[INFO] 10.244.0.7:39572 - 53000 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000125043s
	[INFO] 10.244.0.7:39572 - 15885 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000107764s
	[INFO] 10.244.0.7:59036 - 49059 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000085931s
	[INFO] 10.244.0.7:59036 - 24477 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000059404s
	[INFO] 10.244.0.7:54411 - 9979 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000056064s
	[INFO] 10.244.0.7:54411 - 28669 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000048877s
	[INFO] 10.244.0.7:52971 - 46223 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001493348s
	[INFO] 10.244.0.7:52971 - 16782 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001051447s
	[INFO] 10.244.0.7:53518 - 30287 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000068224s
	[INFO] 10.244.0.7:53518 - 22605 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000064713s
	[INFO] 10.244.0.25:54318 - 45360 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000282233s
	[INFO] 10.244.0.25:32972 - 62934 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000210424s
	[INFO] 10.244.0.25:42178 - 40075 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000252212s
	[INFO] 10.244.0.25:47599 - 47379 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000257315s
	[INFO] 10.244.0.25:56677 - 45282 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000215495s
	[INFO] 10.244.0.25:39948 - 9619 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000116469s
	[INFO] 10.244.0.25:57076 - 61412 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.009917603s
	[INFO] 10.244.0.25:49263 - 8571 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.010199533s
	[INFO] 10.244.0.25:32994 - 26333 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.005841904s
	[INFO] 10.244.0.25:55203 - 27376 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.005819315s
	
	
	==> describe nodes <==
	Name:               addons-723934
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-723934
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8
	                    minikube.k8s.io/name=addons-723934
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T19_01_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-723934
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 19:01:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-723934
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 19:14:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 19:10:37 +0000   Mon, 16 Sep 2024 19:01:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 19:10:37 +0000   Mon, 16 Sep 2024 19:01:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 19:10:37 +0000   Mon, 16 Sep 2024 19:01:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 19:10:37 +0000   Mon, 16 Sep 2024 19:01:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-723934
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 f5f62c46fb1c41ac913c0e675aa3abbb
	  System UUID:                eeda3266-74e1-4151-885d-57a006f8b436
	  Boot ID:                    263049d5-3451-4987-bf06-0c8b5440bd91
	  Kernel Version:             5.15.0-1069-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	  default                     cloud-spanner-emulator-769b77f747-w5j7f     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  gcp-auth                    gcp-auth-89d5ffd79-ldchs                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-xg54q    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-2xqtg                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-addons-723934                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-723934                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-723934       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-87fbz                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-723934                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 nvidia-device-plugin-daemonset-zjpr7        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-fhjfc     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-qhxwh              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             388Mi (4%)  426Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node addons-723934 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x7 over 13m)  kubelet          Node addons-723934 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node addons-723934 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node addons-723934 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node addons-723934 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node addons-723934 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node addons-723934 event: Registered Node addons-723934 in Controller
	
	
	==> dmesg <==
	
	
	==> etcd [0a7e33d6ccf4] <==
	{"level":"info","ts":"2024-09-16T19:01:49.810740Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T19:01:49.810598Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T19:01:50.090879Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T19:01:50.091102Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T19:01:50.091247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-16T19:01:50.091347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T19:01:50.091444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-16T19:01:50.091591Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-16T19:01:50.091682Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-16T19:01:50.097348Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-723934 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T19:01:50.097614Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T19:01:50.098127Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T19:01:50.099169Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T19:01:50.100382Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T19:01:50.101554Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T19:01:50.118952Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T19:01:50.119014Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T19:01:50.103149Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T19:01:50.119134Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T19:01:50.119181Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T19:01:50.104480Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T19:01:50.120265Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-16T19:11:51.749809Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1876}
	{"level":"info","ts":"2024-09-16T19:11:51.800787Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1876,"took":"50.447237ms","hash":3783404031,"current-db-size-bytes":8708096,"current-db-size":"8.7 MB","current-db-size-in-use-bytes":4870144,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2024-09-16T19:11:51.800841Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3783404031,"revision":1876,"compact-revision":-1}
	
	
	==> gcp-auth [8c9398110f75] <==
	2024/09/16 19:04:49 GCP Auth Webhook started!
	2024/09/16 19:05:07 Ready to marshal response ...
	2024/09/16 19:05:07 Ready to write response ...
	2024/09/16 19:05:07 Ready to marshal response ...
	2024/09/16 19:05:07 Ready to write response ...
	2024/09/16 19:05:32 Ready to marshal response ...
	2024/09/16 19:05:32 Ready to write response ...
	2024/09/16 19:05:32 Ready to marshal response ...
	2024/09/16 19:05:32 Ready to write response ...
	2024/09/16 19:05:32 Ready to marshal response ...
	2024/09/16 19:05:32 Ready to write response ...
	2024/09/16 19:13:46 Ready to marshal response ...
	2024/09/16 19:13:46 Ready to write response ...
	2024/09/16 19:13:52 Ready to marshal response ...
	2024/09/16 19:13:52 Ready to write response ...
	2024/09/16 19:14:06 Ready to marshal response ...
	2024/09/16 19:14:06 Ready to write response ...
	2024/09/16 19:14:42 Ready to marshal response ...
	2024/09/16 19:14:42 Ready to write response ...
	
	
	==> kernel <==
	 19:14:48 up  2:57,  0 users,  load average: 0.88, 1.10, 1.89
	Linux addons-723934 5.15.0-1069-aws #75~20.04.1-Ubuntu SMP Mon Aug 19 16:22:47 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [1eee1af8dd16] <==
	I0916 19:05:23.237966       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0916 19:05:23.465977       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	I0916 19:05:23.543086       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0916 19:05:23.896502       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0916 19:05:23.920591       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0916 19:05:24.041586       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0916 19:05:24.059946       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0916 19:05:24.544021       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0916 19:05:24.593653       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0916 19:13:59.510291       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0916 19:14:23.127308       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0916 19:14:23.127361       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0916 19:14:23.156575       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0916 19:14:23.156891       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0916 19:14:23.227789       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0916 19:14:23.228107       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0916 19:14:23.264567       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0916 19:14:23.264933       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0916 19:14:24.229541       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0916 19:14:24.264850       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0916 19:14:24.369896       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0916 19:14:36.978442       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0916 19:14:38.099891       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0916 19:14:42.718560       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0916 19:14:43.037278       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.193.35"}
	
	
	==> kube-controller-manager [d18e53eb68cc] <==
	I0916 19:14:32.237741       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0916 19:14:32.237803       1 shared_informer.go:320] Caches are synced for garbage collector
	W0916 19:14:34.266756       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 19:14:34.266877       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 19:14:34.322906       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 19:14:34.322965       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 19:14:36.882082       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 19:14:36.882136       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0916 19:14:38.102680       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 19:14:39.127080       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 19:14:39.127141       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 19:14:40.929486       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 19:14:40.929548       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 19:14:41.736387       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 19:14:41.736432       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 19:14:42.176058       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 19:14:42.176119       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 19:14:43.164630       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 19:14:43.164683       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 19:14:45.057333       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 19:14:45.057377       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 19:14:46.448459       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 19:14:46.448509       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 19:14:46.904452       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="4.497µs"
	I0916 19:14:47.215149       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	
	
	==> kube-proxy [0261e0b6126d] <==
	I0916 19:02:03.464489       1 server_linux.go:66] "Using iptables proxy"
	I0916 19:02:03.557734       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 19:02:03.557803       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 19:02:03.621426       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 19:02:03.621507       1 server_linux.go:169] "Using iptables Proxier"
	I0916 19:02:03.628271       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 19:02:03.628593       1 server.go:483] "Version info" version="v1.31.1"
	I0916 19:02:03.628608       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 19:02:03.643152       1 config.go:199] "Starting service config controller"
	I0916 19:02:03.643192       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 19:02:03.643216       1 config.go:105] "Starting endpoint slice config controller"
	I0916 19:02:03.643221       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 19:02:03.649063       1 config.go:328] "Starting node config controller"
	I0916 19:02:03.649146       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 19:02:03.743299       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 19:02:03.743361       1 shared_informer.go:320] Caches are synced for service config
	I0916 19:02:03.750527       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9d8623e031dd] <==
	W0916 19:01:55.619881       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 19:01:55.619906       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 19:01:55.619989       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 19:01:55.620006       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 19:01:55.620169       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 19:01:55.620307       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 19:01:55.620401       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 19:01:55.620428       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 19:01:55.620493       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 19:01:55.620512       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 19:01:55.620594       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 19:01:55.620640       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 19:01:55.627848       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0916 19:01:55.628231       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 19:01:55.628184       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 19:01:55.627930       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 19:01:55.628322       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 19:01:55.628046       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 19:01:55.628258       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 19:01:55.628363       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 19:01:55.628093       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 19:01:55.628383       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 19:01:55.628150       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 19:01:55.628408       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0916 19:01:57.008797       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 19:14:42 addons-723934 kubelet[2354]: I0916 19:14:42.967612    2354 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8004041-1d9e-4df5-9a52-1ab543cc3edb" containerName="gadget"
	Sep 16 19:14:42 addons-723934 kubelet[2354]: I0916 19:14:42.967761    2354 memory_manager.go:354] "RemoveStaleState removing state" podUID="66f95b4c-1ce6-4f5e-9c88-cb2e4c5b2507" containerName="hostpath"
	Sep 16 19:14:42 addons-723934 kubelet[2354]: I0916 19:14:42.967885    2354 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8004041-1d9e-4df5-9a52-1ab543cc3edb" containerName="gadget"
	Sep 16 19:14:42 addons-723934 kubelet[2354]: I0916 19:14:42.967970    2354 memory_manager.go:354] "RemoveStaleState removing state" podUID="cec58aa6-2dea-4621-949c-1115f8aa049d" containerName="csi-attacher"
	Sep 16 19:14:42 addons-723934 kubelet[2354]: I0916 19:14:42.967994    2354 memory_manager.go:354] "RemoveStaleState removing state" podUID="66f95b4c-1ce6-4f5e-9c88-cb2e4c5b2507" containerName="csi-provisioner"
	Sep 16 19:14:43 addons-723934 kubelet[2354]: I0916 19:14:43.041506    2354 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prrln\" (UniqueName: \"kubernetes.io/projected/bcdcb714-a1ad-4c1b-b02c-b2bbbe51b8fe-kube-api-access-prrln\") pod \"nginx\" (UID: \"bcdcb714-a1ad-4c1b-b02c-b2bbbe51b8fe\") " pod="default/nginx"
	Sep 16 19:14:43 addons-723934 kubelet[2354]: I0916 19:14:43.041748    2354 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/bcdcb714-a1ad-4c1b-b02c-b2bbbe51b8fe-gcp-creds\") pod \"nginx\" (UID: \"bcdcb714-a1ad-4c1b-b02c-b2bbbe51b8fe\") " pod="default/nginx"
	Sep 16 19:14:46 addons-723934 kubelet[2354]: I0916 19:14:46.458661    2354 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx" podStartSLOduration=2.280049372 podStartE2EDuration="4.45862666s" podCreationTimestamp="2024-09-16 19:14:42 +0000 UTC" firstStartedPulling="2024-09-16 19:14:43.573686827 +0000 UTC m=+766.454453211" lastFinishedPulling="2024-09-16 19:14:45.752264115 +0000 UTC m=+768.633030499" observedRunningTime="2024-09-16 19:14:46.342871047 +0000 UTC m=+769.223637448" watchObservedRunningTime="2024-09-16 19:14:46.45862666 +0000 UTC m=+769.339393052"
	Sep 16 19:14:46 addons-723934 kubelet[2354]: I0916 19:14:46.487199    2354 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7f695ab5-6f17-4ab5-a297-0b004a6aa802-gcp-creds\") pod \"7f695ab5-6f17-4ab5-a297-0b004a6aa802\" (UID: \"7f695ab5-6f17-4ab5-a297-0b004a6aa802\") "
	Sep 16 19:14:46 addons-723934 kubelet[2354]: I0916 19:14:46.487282    2354 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f695ab5-6f17-4ab5-a297-0b004a6aa802-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "7f695ab5-6f17-4ab5-a297-0b004a6aa802" (UID: "7f695ab5-6f17-4ab5-a297-0b004a6aa802"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 19:14:46 addons-723934 kubelet[2354]: I0916 19:14:46.487345    2354 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh567\" (UniqueName: \"kubernetes.io/projected/7f695ab5-6f17-4ab5-a297-0b004a6aa802-kube-api-access-xh567\") pod \"7f695ab5-6f17-4ab5-a297-0b004a6aa802\" (UID: \"7f695ab5-6f17-4ab5-a297-0b004a6aa802\") "
	Sep 16 19:14:46 addons-723934 kubelet[2354]: I0916 19:14:46.487737    2354 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7f695ab5-6f17-4ab5-a297-0b004a6aa802-gcp-creds\") on node \"addons-723934\" DevicePath \"\""
	Sep 16 19:14:46 addons-723934 kubelet[2354]: I0916 19:14:46.489315    2354 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f695ab5-6f17-4ab5-a297-0b004a6aa802-kube-api-access-xh567" (OuterVolumeSpecName: "kube-api-access-xh567") pod "7f695ab5-6f17-4ab5-a297-0b004a6aa802" (UID: "7f695ab5-6f17-4ab5-a297-0b004a6aa802"). InnerVolumeSpecName "kube-api-access-xh567". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 19:14:46 addons-723934 kubelet[2354]: I0916 19:14:46.588864    2354 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-xh567\" (UniqueName: \"kubernetes.io/projected/7f695ab5-6f17-4ab5-a297-0b004a6aa802-kube-api-access-xh567\") on node \"addons-723934\" DevicePath \"\""
	Sep 16 19:14:47 addons-723934 kubelet[2354]: I0916 19:14:47.396766    2354 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fd6f9\" (UniqueName: \"kubernetes.io/projected/abab3938-5ca1-4f67-bec8-0f5518fa637b-kube-api-access-fd6f9\") pod \"abab3938-5ca1-4f67-bec8-0f5518fa637b\" (UID: \"abab3938-5ca1-4f67-bec8-0f5518fa637b\") "
	Sep 16 19:14:47 addons-723934 kubelet[2354]: I0916 19:14:47.400289    2354 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abab3938-5ca1-4f67-bec8-0f5518fa637b-kube-api-access-fd6f9" (OuterVolumeSpecName: "kube-api-access-fd6f9") pod "abab3938-5ca1-4f67-bec8-0f5518fa637b" (UID: "abab3938-5ca1-4f67-bec8-0f5518fa637b"). InnerVolumeSpecName "kube-api-access-fd6f9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 19:14:47 addons-723934 kubelet[2354]: I0916 19:14:47.407058    2354 scope.go:117] "RemoveContainer" containerID="f16214a0464efd9d830bad1bde27c83649b325ee2ca4d3e96da32ea5ef3871ff"
	Sep 16 19:14:47 addons-723934 kubelet[2354]: I0916 19:14:47.493641    2354 scope.go:117] "RemoveContainer" containerID="f16214a0464efd9d830bad1bde27c83649b325ee2ca4d3e96da32ea5ef3871ff"
	Sep 16 19:14:47 addons-723934 kubelet[2354]: E0916 19:14:47.494782    2354 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: f16214a0464efd9d830bad1bde27c83649b325ee2ca4d3e96da32ea5ef3871ff" containerID="f16214a0464efd9d830bad1bde27c83649b325ee2ca4d3e96da32ea5ef3871ff"
	Sep 16 19:14:47 addons-723934 kubelet[2354]: I0916 19:14:47.494964    2354 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"f16214a0464efd9d830bad1bde27c83649b325ee2ca4d3e96da32ea5ef3871ff"} err="failed to get container status \"f16214a0464efd9d830bad1bde27c83649b325ee2ca4d3e96da32ea5ef3871ff\": rpc error: code = Unknown desc = Error response from daemon: No such container: f16214a0464efd9d830bad1bde27c83649b325ee2ca4d3e96da32ea5ef3871ff"
	Sep 16 19:14:47 addons-723934 kubelet[2354]: I0916 19:14:47.497383    2354 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-fd6f9\" (UniqueName: \"kubernetes.io/projected/abab3938-5ca1-4f67-bec8-0f5518fa637b-kube-api-access-fd6f9\") on node \"addons-723934\" DevicePath \"\""
	Sep 16 19:14:47 addons-723934 kubelet[2354]: I0916 19:14:47.598281    2354 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfqfh\" (UniqueName: \"kubernetes.io/projected/095802f7-441a-4970-b3a6-8d88eab7bb43-kube-api-access-mfqfh\") pod \"095802f7-441a-4970-b3a6-8d88eab7bb43\" (UID: \"095802f7-441a-4970-b3a6-8d88eab7bb43\") "
	Sep 16 19:14:47 addons-723934 kubelet[2354]: I0916 19:14:47.601123    2354 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/095802f7-441a-4970-b3a6-8d88eab7bb43-kube-api-access-mfqfh" (OuterVolumeSpecName: "kube-api-access-mfqfh") pod "095802f7-441a-4970-b3a6-8d88eab7bb43" (UID: "095802f7-441a-4970-b3a6-8d88eab7bb43"). InnerVolumeSpecName "kube-api-access-mfqfh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 19:14:47 addons-723934 kubelet[2354]: I0916 19:14:47.699648    2354 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-mfqfh\" (UniqueName: \"kubernetes.io/projected/095802f7-441a-4970-b3a6-8d88eab7bb43-kube-api-access-mfqfh\") on node \"addons-723934\" DevicePath \"\""
	Sep 16 19:14:48 addons-723934 kubelet[2354]: I0916 19:14:48.566688    2354 scope.go:117] "RemoveContainer" containerID="daaa2b56c4302eaffcecd0b709733c855487f4f72c01fb83ffafe74267045d25"
	
	
	==> storage-provisioner [61ccb3c8c186] <==
	I0916 19:02:09.105870       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 19:02:09.119098       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 19:02:09.119147       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 19:02:09.134314       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 19:02:09.136541       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-723934_e2471518-d79f-448f-a55d-847e3eac870e!
	I0916 19:02:09.147555       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"af3c9964-612b-4107-af35-f3f8a98d5588", APIVersion:"v1", ResourceVersion:"532", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-723934_e2471518-d79f-448f-a55d-847e3eac870e became leader
	I0916 19:02:09.237625       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-723934_e2471518-d79f-448f-a55d-847e3eac870e!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-723934 -n addons-723934
helpers_test.go:261: (dbg) Run:  kubectl --context addons-723934 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-hqx8s ingress-nginx-admission-patch-v6gxg
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-723934 describe pod busybox ingress-nginx-admission-create-hqx8s ingress-nginx-admission-patch-v6gxg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-723934 describe pod busybox ingress-nginx-admission-create-hqx8s ingress-nginx-admission-patch-v6gxg: exit status 1 (101.524252ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-723934/192.168.49.2
	Start Time:       Mon, 16 Sep 2024 19:05:32 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nfg45 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-nfg45:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m17s                   default-scheduler  Successfully assigned default/busybox to addons-723934
	  Normal   Pulling    7m52s (x4 over 9m17s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m52s (x4 over 9m16s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m52s (x4 over 9m16s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m38s (x6 over 9m16s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m10s (x21 over 9m16s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-hqx8s" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-v6gxg" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-723934 describe pod busybox ingress-nginx-admission-create-hqx8s ingress-nginx-admission-patch-v6gxg: exit status 1
--- FAIL: TestAddons/parallel/Registry (73.74s)


Test pass (318/343)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 14.33
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.22
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 7.43
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.22
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.56
22 TestOffline 61.25
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 225.69
29 TestAddons/serial/Volcano 41.36
31 TestAddons/serial/GCPAuth/Namespaces 0.21
34 TestAddons/parallel/Ingress 21.81
35 TestAddons/parallel/InspektorGadget 11.9
36 TestAddons/parallel/MetricsServer 6.85
39 TestAddons/parallel/CSI 47.77
40 TestAddons/parallel/Headlamp 17.99
41 TestAddons/parallel/CloudSpanner 6.53
42 TestAddons/parallel/LocalPath 56.18
43 TestAddons/parallel/NvidiaDevicePlugin 5.51
44 TestAddons/parallel/Yakd 11.75
45 TestAddons/StoppedEnableDisable 11.19
46 TestCertOptions 44.79
47 TestCertExpiration 246.66
48 TestDockerFlags 42.41
49 TestForceSystemdFlag 45.19
50 TestForceSystemdEnv 45.86
56 TestErrorSpam/setup 32.57
57 TestErrorSpam/start 0.78
58 TestErrorSpam/status 1.1
59 TestErrorSpam/pause 1.42
60 TestErrorSpam/unpause 1.59
61 TestErrorSpam/stop 11.06
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 69.43
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 34.67
68 TestFunctional/serial/KubeContext 0.07
69 TestFunctional/serial/KubectlGetPods 0.11
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.49
73 TestFunctional/serial/CacheCmd/cache/add_local 1.02
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.71
78 TestFunctional/serial/CacheCmd/cache/delete 0.12
79 TestFunctional/serial/MinikubeKubectlCmd 0.15
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
81 TestFunctional/serial/ExtraConfig 44.74
82 TestFunctional/serial/ComponentHealth 0.1
83 TestFunctional/serial/LogsCmd 1.18
84 TestFunctional/serial/LogsFileCmd 1.28
85 TestFunctional/serial/InvalidService 4.97
87 TestFunctional/parallel/ConfigCmd 0.66
88 TestFunctional/parallel/DashboardCmd 10.37
89 TestFunctional/parallel/DryRun 0.41
90 TestFunctional/parallel/InternationalLanguage 0.21
91 TestFunctional/parallel/StatusCmd 1.06
95 TestFunctional/parallel/ServiceCmdConnect 10.82
96 TestFunctional/parallel/AddonsCmd 0.19
97 TestFunctional/parallel/PersistentVolumeClaim 28.68
99 TestFunctional/parallel/SSHCmd 0.85
100 TestFunctional/parallel/CpCmd 1.96
102 TestFunctional/parallel/FileSync 0.3
103 TestFunctional/parallel/CertSync 1.76
107 TestFunctional/parallel/NodeLabels 0.08
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.43
111 TestFunctional/parallel/License 0.26
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.8
114 TestFunctional/parallel/Version/short 0.07
115 TestFunctional/parallel/Version/components 1.02
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.4
119 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
120 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
121 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
122 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
123 TestFunctional/parallel/ImageCommands/ImageBuild 3.66
124 TestFunctional/parallel/ImageCommands/Setup 0.79
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.16
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.94
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.12
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.32
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.45
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.66
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.44
132 TestFunctional/parallel/DockerEnv/bash 1.08
133 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
134 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
135 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.26
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
137 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
142 TestFunctional/parallel/MountCmd/any-port 8.41
143 TestFunctional/parallel/MountCmd/specific-port 2.67
144 TestFunctional/parallel/MountCmd/VerifyCleanup 1.79
145 TestFunctional/parallel/ServiceCmd/DeployApp 8.23
146 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
147 TestFunctional/parallel/ProfileCmd/profile_list 0.39
148 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
149 TestFunctional/parallel/ServiceCmd/List 0.59
150 TestFunctional/parallel/ServiceCmd/JSONOutput 0.61
151 TestFunctional/parallel/ServiceCmd/HTTPS 0.65
152 TestFunctional/parallel/ServiceCmd/Format 0.62
153 TestFunctional/parallel/ServiceCmd/URL 0.51
154 TestFunctional/delete_echo-server_images 0.05
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 132.38
161 TestMultiControlPlane/serial/DeployApp 54.8
162 TestMultiControlPlane/serial/PingHostFromPods 1.83
163 TestMultiControlPlane/serial/AddWorkerNode 28.74
164 TestMultiControlPlane/serial/NodeLabels 0.14
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.9
166 TestMultiControlPlane/serial/CopyFile 20.97
167 TestMultiControlPlane/serial/StopSecondaryNode 11.88
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.63
169 TestMultiControlPlane/serial/RestartSecondaryNode 72.84
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 4.46
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 251.51
172 TestMultiControlPlane/serial/DeleteSecondaryNode 11.39
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.56
174 TestMultiControlPlane/serial/StopCluster 33.2
175 TestMultiControlPlane/serial/RestartCluster 101.22
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.57
177 TestMultiControlPlane/serial/AddSecondaryNode 44.93
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.89
181 TestImageBuild/serial/Setup 32.02
182 TestImageBuild/serial/NormalBuild 1.91
183 TestImageBuild/serial/BuildWithBuildArg 1.19
184 TestImageBuild/serial/BuildWithDockerIgnore 0.8
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.88
189 TestJSONOutput/start/Command 44.91
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.65
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.57
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 5.83
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.23
214 TestKicCustomNetwork/create_custom_network 34.25
215 TestKicCustomNetwork/use_default_bridge_network 37.73
216 TestKicExistingNetwork 32.83
217 TestKicCustomSubnet 36.51
218 TestKicStaticIP 36.53
219 TestMainNoArgs 0.06
220 TestMinikubeProfile 70.69
223 TestMountStart/serial/StartWithMountFirst 8.2
224 TestMountStart/serial/VerifyMountFirst 0.27
225 TestMountStart/serial/StartWithMountSecond 9.25
226 TestMountStart/serial/VerifyMountSecond 0.28
227 TestMountStart/serial/DeleteFirst 1.5
228 TestMountStart/serial/VerifyMountPostDelete 0.28
229 TestMountStart/serial/Stop 1.25
230 TestMountStart/serial/RestartStopped 8.85
231 TestMountStart/serial/VerifyMountPostStop 0.49
234 TestMultiNode/serial/FreshStart2Nodes 71.37
235 TestMultiNode/serial/DeployApp2Nodes 36.85
236 TestMultiNode/serial/PingHostFrom2Pods 1.07
237 TestMultiNode/serial/AddNode 20.73
238 TestMultiNode/serial/MultiNodeLabels 0.12
239 TestMultiNode/serial/ProfileList 0.43
240 TestMultiNode/serial/CopyFile 11.03
241 TestMultiNode/serial/StopNode 2.34
242 TestMultiNode/serial/StartAfterStop 11.32
243 TestMultiNode/serial/RestartKeepsNodes 104.27
244 TestMultiNode/serial/DeleteNode 5.75
245 TestMultiNode/serial/StopMultiNode 21.66
246 TestMultiNode/serial/RestartMultiNode 59.72
247 TestMultiNode/serial/ValidateNameConflict 36.8
252 TestPreload 145.87
254 TestScheduledStopUnix 108.5
255 TestSkaffold 121.53
257 TestInsufficientStorage 11.65
258 TestRunningBinaryUpgrade 132.29
260 TestKubernetesUpgrade 374.73
261 TestMissingContainerUpgrade 120.42
273 TestStoppedBinaryUpgrade/Setup 0.97
274 TestStoppedBinaryUpgrade/Upgrade 94.39
275 TestStoppedBinaryUpgrade/MinikubeLogs 1.73
277 TestPause/serial/Start 80.56
278 TestPause/serial/SecondStartNoReconfiguration 34.48
279 TestPause/serial/Pause 0.64
280 TestPause/serial/VerifyStatus 0.34
281 TestPause/serial/Unpause 0.55
282 TestPause/serial/PauseAgain 1.17
283 TestPause/serial/DeletePaused 2.26
284 TestPause/serial/VerifyDeletedResources 14.84
293 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
294 TestNoKubernetes/serial/StartWithK8s 36.45
295 TestNoKubernetes/serial/StartWithStopK8s 19.95
296 TestNoKubernetes/serial/Start 8.92
297 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
298 TestNoKubernetes/serial/ProfileList 17.33
299 TestNetworkPlugins/group/auto/Start 83.57
300 TestNoKubernetes/serial/Stop 1.29
301 TestNoKubernetes/serial/StartNoArgs 9.59
302 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.35
303 TestNetworkPlugins/group/kindnet/Start 74.94
304 TestNetworkPlugins/group/auto/KubeletFlags 0.39
305 TestNetworkPlugins/group/auto/NetCatPod 11.37
306 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
307 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
308 TestNetworkPlugins/group/kindnet/NetCatPod 10.32
309 TestNetworkPlugins/group/auto/DNS 0.24
310 TestNetworkPlugins/group/auto/Localhost 0.18
311 TestNetworkPlugins/group/auto/HairPin 0.2
312 TestNetworkPlugins/group/kindnet/DNS 0.3
313 TestNetworkPlugins/group/kindnet/Localhost 0.23
314 TestNetworkPlugins/group/kindnet/HairPin 0.24
315 TestNetworkPlugins/group/calico/Start 92.35
316 TestNetworkPlugins/group/custom-flannel/Start 63.54
317 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.44
318 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.41
319 TestNetworkPlugins/group/custom-flannel/DNS 0.22
320 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
321 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
322 TestNetworkPlugins/group/calico/ControllerPod 6.02
323 TestNetworkPlugins/group/calico/KubeletFlags 0.41
324 TestNetworkPlugins/group/calico/NetCatPod 12.39
325 TestNetworkPlugins/group/calico/DNS 0.51
326 TestNetworkPlugins/group/calico/Localhost 0.25
327 TestNetworkPlugins/group/calico/HairPin 0.21
328 TestNetworkPlugins/group/false/Start 85.55
329 TestNetworkPlugins/group/enable-default-cni/Start 84.44
330 TestNetworkPlugins/group/false/KubeletFlags 0.36
331 TestNetworkPlugins/group/false/NetCatPod 11.29
332 TestNetworkPlugins/group/false/DNS 0.2
333 TestNetworkPlugins/group/false/Localhost 0.24
334 TestNetworkPlugins/group/false/HairPin 0.18
335 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.37
336 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.36
337 TestNetworkPlugins/group/flannel/Start 63.42
338 TestNetworkPlugins/group/enable-default-cni/DNS 0.26
339 TestNetworkPlugins/group/enable-default-cni/Localhost 0.34
340 TestNetworkPlugins/group/enable-default-cni/HairPin 0.27
341 TestNetworkPlugins/group/bridge/Start 88.18
342 TestNetworkPlugins/group/flannel/ControllerPod 6.01
343 TestNetworkPlugins/group/flannel/KubeletFlags 0.34
344 TestNetworkPlugins/group/flannel/NetCatPod 12.32
345 TestNetworkPlugins/group/flannel/DNS 0.21
346 TestNetworkPlugins/group/flannel/Localhost 0.18
347 TestNetworkPlugins/group/flannel/HairPin 0.16
348 TestNetworkPlugins/group/kubenet/Start 76.26
349 TestNetworkPlugins/group/bridge/KubeletFlags 0.43
350 TestNetworkPlugins/group/bridge/NetCatPod 12.33
351 TestNetworkPlugins/group/bridge/DNS 0.3
352 TestNetworkPlugins/group/bridge/Localhost 0.39
353 TestNetworkPlugins/group/bridge/HairPin 0.3
355 TestStartStop/group/old-k8s-version/serial/FirstStart 148.64
356 TestNetworkPlugins/group/kubenet/KubeletFlags 0.38
357 TestNetworkPlugins/group/kubenet/NetCatPod 12.36
358 TestNetworkPlugins/group/kubenet/DNS 0.28
359 TestNetworkPlugins/group/kubenet/Localhost 0.27
360 TestNetworkPlugins/group/kubenet/HairPin 0.34
362 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 71.89
363 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.44
364 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.25
365 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.06
366 TestStartStop/group/old-k8s-version/serial/DeployApp 10.66
367 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
368 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 269.29
369 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.42
370 TestStartStop/group/old-k8s-version/serial/Stop 11.18
371 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.3
372 TestStartStop/group/old-k8s-version/serial/SecondStart 145.24
373 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
374 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
375 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
376 TestStartStop/group/old-k8s-version/serial/Pause 2.97
378 TestStartStop/group/embed-certs/serial/FirstStart 46.52
379 TestStartStop/group/embed-certs/serial/DeployApp 10.36
380 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.13
381 TestStartStop/group/embed-certs/serial/Stop 11
382 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
383 TestStartStop/group/embed-certs/serial/SecondStart 268.59
384 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
385 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
386 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
387 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.03
389 TestStartStop/group/no-preload/serial/FirstStart 50.65
390 TestStartStop/group/no-preload/serial/DeployApp 8.44
391 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.21
392 TestStartStop/group/no-preload/serial/Stop 11.05
393 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
394 TestStartStop/group/no-preload/serial/SecondStart 267.76
395 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
396 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.11
397 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
398 TestStartStop/group/embed-certs/serial/Pause 3.14
400 TestStartStop/group/newest-cni/serial/FirstStart 39.25
401 TestStartStop/group/newest-cni/serial/DeployApp 0
402 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.35
403 TestStartStop/group/newest-cni/serial/Stop 11.07
404 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
405 TestStartStop/group/newest-cni/serial/SecondStart 19.91
406 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
407 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
408 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.31
409 TestStartStop/group/newest-cni/serial/Pause 3.19
410 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
411 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
412 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
413 TestStartStop/group/no-preload/serial/Pause 2.87
TestDownloadOnly/v1.20.0/json-events (14.33s)
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-555988 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-555988 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (14.326028954s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (14.33s)

TestDownloadOnly/v1.20.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-555988
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-555988: exit status 85 (71.44145ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-555988 | jenkins | v1.34.0 | 16 Sep 24 19:00 UTC |          |
	|         | -p download-only-555988        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 19:00:40
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 19:00:40.938310  572846 out.go:345] Setting OutFile to fd 1 ...
	I0916 19:00:40.938724  572846 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:00:40.938738  572846 out.go:358] Setting ErrFile to fd 2...
	I0916 19:00:40.938745  572846 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:00:40.939007  572846 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-567461/.minikube/bin
	W0916 19:00:40.939163  572846 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19649-567461/.minikube/config/config.json: open /home/jenkins/minikube-integration/19649-567461/.minikube/config/config.json: no such file or directory
	I0916 19:00:40.939616  572846 out.go:352] Setting JSON to true
	I0916 19:00:40.940432  572846 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9779,"bootTime":1726503462,"procs":167,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0916 19:00:40.940518  572846 start.go:139] virtualization:  
	I0916 19:00:40.942753  572846 out.go:97] [download-only-555988] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0916 19:00:40.942914  572846 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19649-567461/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 19:00:40.943042  572846 notify.go:220] Checking for updates...
	I0916 19:00:40.944609  572846 out.go:169] MINIKUBE_LOCATION=19649
	I0916 19:00:40.946044  572846 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 19:00:40.947482  572846 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19649-567461/kubeconfig
	I0916 19:00:40.948966  572846 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-567461/.minikube
	I0916 19:00:40.950401  572846 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0916 19:00:40.954059  572846 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0916 19:00:40.954414  572846 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 19:00:40.975490  572846 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 19:00:40.975619  572846 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 19:00:41.038012  572846 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-16 19:00:41.027228603 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0916 19:00:41.038138  572846 docker.go:318] overlay module found
	I0916 19:00:41.039582  572846 out.go:97] Using the docker driver based on user configuration
	I0916 19:00:41.039608  572846 start.go:297] selected driver: docker
	I0916 19:00:41.039615  572846 start.go:901] validating driver "docker" against <nil>
	I0916 19:00:41.039714  572846 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 19:00:41.090674  572846 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-16 19:00:41.081382779 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0916 19:00:41.090933  572846 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 19:00:41.091239  572846 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0916 19:00:41.091397  572846 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 19:00:41.093198  572846 out.go:169] Using Docker driver with root privileges
	I0916 19:00:41.094382  572846 cni.go:84] Creating CNI manager for ""
	I0916 19:00:41.094458  572846 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0916 19:00:41.094547  572846 start.go:340] cluster config:
	{Name:download-only-555988 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-555988 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 19:00:41.096124  572846 out.go:97] Starting "download-only-555988" primary control-plane node in "download-only-555988" cluster
	I0916 19:00:41.096156  572846 cache.go:121] Beginning downloading kic base image for docker with docker
	I0916 19:00:41.097536  572846 out.go:97] Pulling base image v0.0.45-1726481311-19649 ...
	I0916 19:00:41.097591  572846 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0916 19:00:41.097664  572846 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc in local docker daemon
	I0916 19:00:41.112902  572846 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc to local cache
	I0916 19:00:41.113091  572846 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc in local cache directory
	I0916 19:00:41.113195  572846 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc to local cache
	I0916 19:00:41.149347  572846 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0916 19:00:41.149386  572846 cache.go:56] Caching tarball of preloaded images
	I0916 19:00:41.149565  572846 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0916 19:00:41.151631  572846 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0916 19:00:41.151670  572846 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0916 19:00:41.238518  572846 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /home/jenkins/minikube-integration/19649-567461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0916 19:00:47.491188  572846 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0916 19:00:47.491293  572846 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19649-567461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0916 19:00:48.521269  572846 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0916 19:00:48.521659  572846 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/download-only-555988/config.json ...
	I0916 19:00:48.521693  572846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/download-only-555988/config.json: {Name:mk06c45e5f791349b9132aee975218de532840b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:00:48.521874  572846 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0916 19:00:48.522066  572846 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19649-567461/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-555988 host does not exist
	  To start a cluster, run: "minikube start -p download-only-555988"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

TestDownloadOnly/v1.20.0/DeleteAll (0.22s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-555988
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.1/json-events (7.43s)
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-078157 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-078157 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (7.427667633s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (7.43s)

TestDownloadOnly/v1.31.1/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-078157
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-078157: exit status 85 (77.389715ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-555988 | jenkins | v1.34.0 | 16 Sep 24 19:00 UTC |                     |
	|         | -p download-only-555988        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 16 Sep 24 19:00 UTC | 16 Sep 24 19:00 UTC |
	| delete  | -p download-only-555988        | download-only-555988 | jenkins | v1.34.0 | 16 Sep 24 19:00 UTC | 16 Sep 24 19:00 UTC |
	| start   | -o=json --download-only        | download-only-078157 | jenkins | v1.34.0 | 16 Sep 24 19:00 UTC |                     |
	|         | -p download-only-078157        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 19:00:55
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 19:00:55.684582  573047 out.go:345] Setting OutFile to fd 1 ...
	I0916 19:00:55.684832  573047 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:00:55.684860  573047 out.go:358] Setting ErrFile to fd 2...
	I0916 19:00:55.684879  573047 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:00:55.685179  573047 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-567461/.minikube/bin
	I0916 19:00:55.685653  573047 out.go:352] Setting JSON to true
	I0916 19:00:55.686687  573047 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9794,"bootTime":1726503462,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0916 19:00:55.686798  573047 start.go:139] virtualization:  
	I0916 19:00:55.689043  573047 out.go:97] [download-only-078157] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0916 19:00:55.689355  573047 notify.go:220] Checking for updates...
	I0916 19:00:55.691709  573047 out.go:169] MINIKUBE_LOCATION=19649
	I0916 19:00:55.693503  573047 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 19:00:55.695035  573047 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19649-567461/kubeconfig
	I0916 19:00:55.696320  573047 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-567461/.minikube
	I0916 19:00:55.697728  573047 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0916 19:00:55.700992  573047 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0916 19:00:55.701363  573047 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 19:00:55.725099  573047 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 19:00:55.725282  573047 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 19:00:55.800686  573047 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-16 19:00:55.789848644 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0916 19:00:55.800810  573047 docker.go:318] overlay module found
	I0916 19:00:55.802565  573047 out.go:97] Using the docker driver based on user configuration
	I0916 19:00:55.802600  573047 start.go:297] selected driver: docker
	I0916 19:00:55.802608  573047 start.go:901] validating driver "docker" against <nil>
	I0916 19:00:55.802723  573047 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 19:00:55.858680  573047 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-16 19:00:55.848983672 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0916 19:00:55.858919  573047 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 19:00:55.859222  573047 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0916 19:00:55.859401  573047 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 19:00:55.861331  573047 out.go:169] Using Docker driver with root privileges
	I0916 19:00:55.862683  573047 cni.go:84] Creating CNI manager for ""
	I0916 19:00:55.862761  573047 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 19:00:55.862777  573047 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 19:00:55.862922  573047 start.go:340] cluster config:
	{Name:download-only-078157 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-078157 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 19:00:55.864568  573047 out.go:97] Starting "download-only-078157" primary control-plane node in "download-only-078157" cluster
	I0916 19:00:55.864603  573047 cache.go:121] Beginning downloading kic base image for docker with docker
	I0916 19:00:55.867017  573047 out.go:97] Pulling base image v0.0.45-1726481311-19649 ...
	I0916 19:00:55.867081  573047 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 19:00:55.867197  573047 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc in local docker daemon
	I0916 19:00:55.884754  573047 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc to local cache
	I0916 19:00:55.884878  573047 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc in local cache directory
	I0916 19:00:55.884913  573047 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc in local cache directory, skipping pull
	I0916 19:00:55.884919  573047 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc exists in cache, skipping pull
	I0916 19:00:55.884926  573047 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc as a tarball
	I0916 19:00:55.947248  573047 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 19:00:55.947275  573047 cache.go:56] Caching tarball of preloaded images
	I0916 19:00:55.947437  573047 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 19:00:55.949445  573047 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0916 19:00:55.949481  573047 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0916 19:00:56.043407  573047 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /home/jenkins/minikube-integration/19649-567461/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-078157 host does not exist
	  To start a cluster, run: "minikube start -p download-only-078157"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

TestDownloadOnly/v1.31.1/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.22s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-078157
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.56s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-053813 --alsologtostderr --binary-mirror http://127.0.0.1:36551 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-053813" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-053813
--- PASS: TestBinaryMirror (0.56s)

TestOffline (61.25s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-396931 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-396931 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (58.925250424s)
helpers_test.go:175: Cleaning up "offline-docker-396931" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-396931
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-396931: (2.321780426s)
--- PASS: TestOffline (61.25s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-723934
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-723934: exit status 85 (70.403919ms)

-- stdout --
	* Profile "addons-723934" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-723934"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-723934
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-723934: exit status 85 (68.352437ms)

-- stdout --
	* Profile "addons-723934" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-723934"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (225.69s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-723934 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-723934 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (3m45.693124839s)
--- PASS: TestAddons/Setup (225.69s)

TestAddons/serial/Volcano (41.36s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 56.689155ms
addons_test.go:897: volcano-scheduler stabilized in 57.091165ms
addons_test.go:905: volcano-admission stabilized in 57.556615ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-dmml7" [8c4343a9-a256-4217-bf0f-6e9db0156e12] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004019557s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-klnqw" [c2a45d53-3f7b-4fea-b31f-cebc8859d85f] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003869532s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-cv2gc" [8d582356-0326-43f0-b439-4bee39b1ccb4] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004292421s
addons_test.go:932: (dbg) Run:  kubectl --context addons-723934 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-723934 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-723934 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [4bf9e595-20a8-4cdf-81d7-3103c6ba0e04] Pending
helpers_test.go:344: "test-job-nginx-0" [4bf9e595-20a8-4cdf-81d7-3103c6ba0e04] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [4bf9e595-20a8-4cdf-81d7-3103c6ba0e04] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.003779544s
addons_test.go:968: (dbg) Run:  out/minikube-linux-arm64 -p addons-723934 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-arm64 -p addons-723934 addons disable volcano --alsologtostderr -v=1: (10.576989147s)
--- PASS: TestAddons/serial/Volcano (41.36s)

TestAddons/serial/GCPAuth/Namespaces (0.21s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-723934 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-723934 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.21s)

TestAddons/parallel/Ingress (21.81s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-723934 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-723934 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-723934 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [bcdcb714-a1ad-4c1b-b02c-b2bbbe51b8fe] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [bcdcb714-a1ad-4c1b-b02c-b2bbbe51b8fe] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003523036s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-723934 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-723934 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-723934 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-723934 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-723934 addons disable ingress-dns --alsologtostderr -v=1: (1.728101689s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-723934 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-723934 addons disable ingress --alsologtostderr -v=1: (8.285936374s)
--- PASS: TestAddons/parallel/Ingress (21.81s)

TestAddons/parallel/InspektorGadget (11.9s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-d5scq" [b8004041-1d9e-4df5-9a52-1ab543cc3edb] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004515812s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-723934
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-723934: (5.889681155s)
--- PASS: TestAddons/parallel/InspektorGadget (11.90s)

TestAddons/parallel/MetricsServer (6.85s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.688955ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-z5ddv" [cda63f7f-b04a-4a4b-ab51-d1ef4c36f179] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003725007s
addons_test.go:417: (dbg) Run:  kubectl --context addons-723934 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-723934 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.85s)

TestAddons/parallel/CSI (47.77s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 9.936985ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-723934 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723934 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723934 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723934 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723934 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723934 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723934 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723934 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723934 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723934 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723934 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723934 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723934 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723934 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723934 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723934 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723934 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723934 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-723934 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [bef73408-e967-40f8-9201-ec354f71758b] Pending
helpers_test.go:344: "task-pv-pod" [bef73408-e967-40f8-9201-ec354f71758b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [bef73408-e967-40f8-9201-ec354f71758b] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.004791347s
addons_test.go:590: (dbg) Run:  kubectl --context addons-723934 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-723934 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-723934 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-723934 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-723934 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-723934 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723934 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723934 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723934 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723934 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723934 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-723934 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [73a55912-9e45-4794-83c2-aa8b9cb070a4] Pending
helpers_test.go:344: "task-pv-pod-restore" [73a55912-9e45-4794-83c2-aa8b9cb070a4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [73a55912-9e45-4794-83c2-aa8b9cb070a4] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004747423s
addons_test.go:632: (dbg) Run:  kubectl --context addons-723934 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-723934 delete pod task-pv-pod-restore: (1.405062127s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-723934 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-723934 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-723934 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-723934 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.792611256s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-723934 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (47.77s)

TestAddons/parallel/Headlamp (17.99s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-723934 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-cpq7s" [8c172b48-3990-4491-9d44-e271f0730547] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-cpq7s" [8c172b48-3990-4491-9d44-e271f0730547] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-cpq7s" [8c172b48-3990-4491-9d44-e271f0730547] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004043224s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-723934 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-723934 addons disable headlamp --alsologtostderr -v=1: (6.039840995s)
--- PASS: TestAddons/parallel/Headlamp (17.99s)

TestAddons/parallel/CloudSpanner (6.53s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-w5j7f" [410ce7aa-05f2-4cd1-bb81-64470c0c4c09] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003087942s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-723934
--- PASS: TestAddons/parallel/CloudSpanner (6.53s)

TestAddons/parallel/LocalPath (56.18s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-723934 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-723934 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723934 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723934 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723934 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723934 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723934 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723934 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723934 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [34d7a69f-aaa4-48a0-b0f5-b39c7761a22c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [34d7a69f-aaa4-48a0-b0f5-b39c7761a22c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [34d7a69f-aaa4-48a0-b0f5-b39c7761a22c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004485725s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-723934 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-723934 ssh "cat /opt/local-path-provisioner/pvc-f6224d51-58bd-4e9d-8d53-f114cdcaaefd_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-723934 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-723934 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-723934 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-arm64 -p addons-723934 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.819456383s)
--- PASS: TestAddons/parallel/LocalPath (56.18s)

TestAddons/parallel/NvidiaDevicePlugin (5.51s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-zjpr7" [7b3f3ae7-3136-4b99-96ae-17db09d0cc53] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004388197s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-723934
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.51s)

TestAddons/parallel/Yakd (11.75s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-qhxwh" [efa030d6-8736-4b4c-b49b-c7f6c5fc0ff3] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004613726s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-723934 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-723934 addons disable yakd --alsologtostderr -v=1: (5.739234797s)
--- PASS: TestAddons/parallel/Yakd (11.75s)

TestAddons/StoppedEnableDisable (11.19s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-723934
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-723934: (10.917935121s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-723934
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-723934
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-723934
--- PASS: TestAddons/StoppedEnableDisable (11.19s)

TestCertOptions (44.79s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-436982 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
E0916 19:54:35.138941  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/functional-612705/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:54:50.677127  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-436982 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (41.980411346s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-436982 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-436982 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-436982 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-436982" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-436982
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-436982: (2.104066912s)
--- PASS: TestCertOptions (44.79s)

TestCertExpiration (246.66s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-587957 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-587957 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (40.619979952s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-587957 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-587957 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (23.847466627s)
helpers_test.go:175: Cleaning up "cert-expiration-587957" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-587957
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-587957: (2.186853419s)
--- PASS: TestCertExpiration (246.66s)

TestDockerFlags (42.41s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-683972 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-683972 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (37.912343182s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-683972 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-683972 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-683972" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-683972
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-683972: (3.674176541s)
--- PASS: TestDockerFlags (42.41s)

TestForceSystemdFlag (45.19s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-670961 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-670961 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (42.187792926s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-670961 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-670961" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-670961
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-670961: (2.388152713s)
--- PASS: TestForceSystemdFlag (45.19s)

TestForceSystemdEnv (45.86s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-621585 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0916 19:52:53.756467  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-621585 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (43.202270811s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-621585 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-621585" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-621585
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-621585: (2.192116691s)
--- PASS: TestForceSystemdEnv (45.86s)

TestErrorSpam/setup (32.57s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-414915 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-414915 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-414915 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-414915 --driver=docker  --container-runtime=docker: (32.569448366s)
--- PASS: TestErrorSpam/setup (32.57s)

TestErrorSpam/start (0.78s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-414915 --log_dir /tmp/nospam-414915 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-414915 --log_dir /tmp/nospam-414915 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-414915 --log_dir /tmp/nospam-414915 start --dry-run
--- PASS: TestErrorSpam/start (0.78s)

TestErrorSpam/status (1.10s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-414915 --log_dir /tmp/nospam-414915 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-414915 --log_dir /tmp/nospam-414915 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-414915 --log_dir /tmp/nospam-414915 status
--- PASS: TestErrorSpam/status (1.10s)

TestErrorSpam/pause (1.42s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-414915 --log_dir /tmp/nospam-414915 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-414915 --log_dir /tmp/nospam-414915 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-414915 --log_dir /tmp/nospam-414915 pause
--- PASS: TestErrorSpam/pause (1.42s)

TestErrorSpam/unpause (1.59s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-414915 --log_dir /tmp/nospam-414915 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-414915 --log_dir /tmp/nospam-414915 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-414915 --log_dir /tmp/nospam-414915 unpause
--- PASS: TestErrorSpam/unpause (1.59s)

TestErrorSpam/stop (11.06s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-414915 --log_dir /tmp/nospam-414915 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-414915 --log_dir /tmp/nospam-414915 stop: (10.854304243s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-414915 --log_dir /tmp/nospam-414915 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-414915 --log_dir /tmp/nospam-414915 stop
--- PASS: TestErrorSpam/stop (11.06s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19649-567461/.minikube/files/etc/test/nested/copy/572841/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (69.43s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-612705 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-612705 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m9.424949385s)
--- PASS: TestFunctional/serial/StartWithProxy (69.43s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (34.67s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-612705 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-612705 --alsologtostderr -v=8: (34.668175158s)
functional_test.go:663: soft start took 34.671389328s for "functional-612705" cluster.
--- PASS: TestFunctional/serial/SoftStart (34.67s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-612705 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.49s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-612705 cache add registry.k8s.io/pause:3.1: (1.190527028s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-612705 cache add registry.k8s.io/pause:3.3: (1.220579833s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-612705 cache add registry.k8s.io/pause:latest: (1.075575151s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.49s)

TestFunctional/serial/CacheCmd/cache/add_local (1.02s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-612705 /tmp/TestFunctionalserialCacheCmdcacheadd_local2389769763/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 cache add minikube-local-cache-test:functional-612705
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 cache delete minikube-local-cache-test:functional-612705
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-612705
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.02s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-612705 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (346.244275ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 kubectl -- --context functional-612705 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-612705 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (44.74s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-612705 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-612705 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.743798523s)
functional_test.go:761: restart took 44.743908953s for "functional-612705" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (44.74s)

TestFunctional/serial/ComponentHealth (0.10s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-612705 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.18s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-612705 logs: (1.181363126s)
--- PASS: TestFunctional/serial/LogsCmd (1.18s)

TestFunctional/serial/LogsFileCmd (1.28s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 logs --file /tmp/TestFunctionalserialLogsFileCmd3740244721/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-612705 logs --file /tmp/TestFunctionalserialLogsFileCmd3740244721/001/logs.txt: (1.279938849s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.28s)

TestFunctional/serial/InvalidService (4.97s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-612705 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-612705
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-612705: exit status 115 (516.5608ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31946 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-612705 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-612705 delete -f testdata/invalidsvc.yaml: (1.191499482s)
--- PASS: TestFunctional/serial/InvalidService (4.97s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-612705 config get cpus: exit status 14 (70.878943ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-612705 config get cpus: exit status 14 (174.819434ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.66s)

TestFunctional/parallel/DashboardCmd (10.37s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-612705 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-612705 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 616670: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.37s)

TestFunctional/parallel/DryRun (0.41s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-612705 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-612705 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (170.066825ms)

-- stdout --
	* [functional-612705] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19649-567461/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-567461/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0916 19:20:16.561355  615963 out.go:345] Setting OutFile to fd 1 ...
	I0916 19:20:16.561557  615963 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:20:16.561583  615963 out.go:358] Setting ErrFile to fd 2...
	I0916 19:20:16.561601  615963 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:20:16.561899  615963 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-567461/.minikube/bin
	I0916 19:20:16.562325  615963 out.go:352] Setting JSON to false
	I0916 19:20:16.563531  615963 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":10955,"bootTime":1726503462,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0916 19:20:16.563641  615963 start.go:139] virtualization:  
	I0916 19:20:16.566377  615963 out.go:177] * [functional-612705] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0916 19:20:16.568836  615963 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 19:20:16.568955  615963 notify.go:220] Checking for updates...
	I0916 19:20:16.571743  615963 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 19:20:16.573693  615963 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19649-567461/kubeconfig
	I0916 19:20:16.575056  615963 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-567461/.minikube
	I0916 19:20:16.576305  615963 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0916 19:20:16.577729  615963 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 19:20:16.579852  615963 config.go:182] Loaded profile config "functional-612705": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 19:20:16.580379  615963 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 19:20:16.604807  615963 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 19:20:16.604954  615963 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 19:20:16.673127  615963 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-16 19:20:16.661256803 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0916 19:20:16.673332  615963 docker.go:318] overlay module found
	I0916 19:20:16.674720  615963 out.go:177] * Using the docker driver based on existing profile
	I0916 19:20:16.676210  615963 start.go:297] selected driver: docker
	I0916 19:20:16.676246  615963 start.go:901] validating driver "docker" against &{Name:functional-612705 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-612705 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 19:20:16.676361  615963 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 19:20:16.678363  615963 out.go:201] 
	W0916 19:20:16.679617  615963 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0916 19:20:16.680973  615963 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-612705 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.41s)

TestFunctional/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-612705 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-612705 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (204.777699ms)

-- stdout --
	* [functional-612705] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19649-567461/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-567461/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0916 19:20:18.047517  616296 out.go:345] Setting OutFile to fd 1 ...
	I0916 19:20:18.047706  616296 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:20:18.047728  616296 out.go:358] Setting ErrFile to fd 2...
	I0916 19:20:18.047734  616296 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:20:18.049088  616296 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-567461/.minikube/bin
	I0916 19:20:18.049621  616296 out.go:352] Setting JSON to false
	I0916 19:20:18.051224  616296 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":10956,"bootTime":1726503462,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0916 19:20:18.051322  616296 start.go:139] virtualization:  
	I0916 19:20:18.053469  616296 out.go:177] * [functional-612705] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0916 19:20:18.055271  616296 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 19:20:18.055322  616296 notify.go:220] Checking for updates...
	I0916 19:20:18.059797  616296 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 19:20:18.061295  616296 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19649-567461/kubeconfig
	I0916 19:20:18.062970  616296 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-567461/.minikube
	I0916 19:20:18.064428  616296 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0916 19:20:18.065656  616296 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 19:20:18.067484  616296 config.go:182] Loaded profile config "functional-612705": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 19:20:18.068134  616296 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 19:20:18.103476  616296 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 19:20:18.103675  616296 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 19:20:18.175148  616296 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-16 19:20:18.165176348 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0916 19:20:18.175259  616296 docker.go:318] overlay module found
	I0916 19:20:18.177344  616296 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0916 19:20:18.178852  616296 start.go:297] selected driver: docker
	I0916 19:20:18.178872  616296 start.go:901] validating driver "docker" against &{Name:functional-612705 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-612705 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 19:20:18.178978  616296 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 19:20:18.181169  616296 out.go:201] 
	W0916 19:20:18.182485  616296 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0916 19:20:18.184103  616296 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

TestFunctional/parallel/StatusCmd (1.06s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.06s)

TestFunctional/parallel/ServiceCmdConnect (10.82s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-612705 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-612705 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-h2l6m" [d91882fb-6052-4b91-b31e-989ba073fda3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
E0916 19:20:00.940824  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "hello-node-connect-65d86f57f4-h2l6m" [d91882fb-6052-4b91-b31e-989ba073fda3] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003369992s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30483
functional_test.go:1675: http://192.168.49.2:30483: success! body:

Hostname: hello-node-connect-65d86f57f4-h2l6m

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30483
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.82s)

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (28.68s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [5117aeb5-87c7-406f-9feb-fa40f9cd767e] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004447029s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-612705 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-612705 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-612705 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-612705 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0cc8e076-756f-4e4f-9ca0-b42dbe7eb3e4] Pending
E0916 19:19:53.257298  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [0cc8e076-756f-4e4f-9ca0-b42dbe7eb3e4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0cc8e076-756f-4e4f-9ca0-b42dbe7eb3e4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.003660949s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-612705 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-612705 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-612705 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f8050126-38f7-4310-a83e-06040efd3fe9] Pending
helpers_test.go:344: "sp-pod" [f8050126-38f7-4310-a83e-06040efd3fe9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f8050126-38f7-4310-a83e-06040efd3fe9] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004965009s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-612705 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.68s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.85s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 ssh -n functional-612705 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 cp functional-612705:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd251153671/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 ssh -n functional-612705 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 ssh -n functional-612705 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.96s)
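The CpCmd assertions above amount to a copy-then-read-back round trip: copy `cp-test.txt` in, `cat` it back, and compare. A minimal local sketch of that check, with plain files standing in for `minikube cp` and `minikube ssh` (the payload string is hypothetical):

```python
import pathlib
import tempfile

def round_trip(payload: str) -> bool:
    """Copy a file into a target dir, read it back, and compare contents."""
    with tempfile.TemporaryDirectory() as tmp:
        src = pathlib.Path(tmp) / "cp-test.txt"
        src.write_text(payload)
        # Stand-in for `minikube cp` into /tmp/does/not/exist/ followed by
        # `ssh sudo cat`; the missing directories are created first.
        dest = pathlib.Path(tmp) / "does" / "not" / "exist"
        dest.mkdir(parents=True)
        copied = dest / "cp-test.txt"
        copied.write_text(src.read_text())
        return copied.read_text() == payload

print(round_trip("cp-test payload"))  # True
```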
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/572841/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 ssh "sudo cat /etc/test/nested/copy/572841/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.30s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/572841.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 ssh "sudo cat /etc/ssl/certs/572841.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/572841.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 ssh "sudo cat /usr/share/ca-certificates/572841.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/5728412.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 ssh "sudo cat /etc/ssl/certs/5728412.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/5728412.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 ssh "sudo cat /usr/share/ca-certificates/5728412.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.76s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-612705 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
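The go-template in the NodeLabels run above prints only the label keys of the first node. The same extraction over `kubectl get nodes -o json` output can be sketched as follows (the sample labels are illustrative, not taken from this run):

```python
import json

# Illustrative stand-in for `kubectl get nodes -o json` output.
nodes_json = json.dumps({"items": [{"metadata": {"labels": {
    "kubernetes.io/arch": "arm64",
    "kubernetes.io/hostname": "functional-612705",
    "kubernetes.io/os": "linux",
}}}]})

# Equivalent of: {{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}
labels = json.loads(nodes_json)["items"][0]["metadata"]["labels"]
print(" ".join(sorted(labels)))
```

Go templates iterate maps in sorted key order, which `sorted()` reproduces here.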
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-612705 ssh "sudo systemctl is-active crio": exit status 1 (434.757865ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)
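The non-zero exit above is the expected outcome: `systemctl is-active` prints the unit state and exits 0 only for an active unit, so with docker as the runtime, `crio` reports `inactive` with status 3 (surfaced in the log as ssh exit status 3). A sketch of the pass condition, assuming this reading of the log:

```python
def runtime_disabled(stdout: str, exit_status: int) -> bool:
    """True when `systemctl is-active <unit>` reports a non-active unit."""
    # systemctl exits non-zero and prints the state (e.g. "inactive")
    # when the unit is not running; both signals are checked here.
    return exit_status != 0 and stdout.strip() != "active"

print(runtime_disabled("inactive\n", 3))  # crio not running -> True
print(runtime_disabled("active\n", 0))    # runtime active -> False
```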
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.26s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-612705 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-612705 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-612705 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 611271: os: process already finished
helpers_test.go:508: unable to kill pid 611111: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-612705 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.80s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-612705 version -o=json --components: (1.018984049s)
--- PASS: TestFunctional/parallel/Version/components (1.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-612705 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-612705 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [d9dd7b2d-4340-4709-b194-d82a1398f53a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [d9dd7b2d-4340-4709-b194-d82a1398f53a] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.004988913s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.40s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-612705 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-612705
docker.io/kicbase/echo-server:functional-612705
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-612705 image ls --format short --alsologtostderr:
I0916 19:20:22.450532  617134 out.go:345] Setting OutFile to fd 1 ...
I0916 19:20:22.450964  617134 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 19:20:22.450998  617134 out.go:358] Setting ErrFile to fd 2...
I0916 19:20:22.451019  617134 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 19:20:22.451332  617134 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-567461/.minikube/bin
I0916 19:20:22.452146  617134 config.go:182] Loaded profile config "functional-612705": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 19:20:22.452344  617134 config.go:182] Loaded profile config "functional-612705": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 19:20:22.452927  617134 cli_runner.go:164] Run: docker container inspect functional-612705 --format={{.State.Status}}
I0916 19:20:22.474360  617134 ssh_runner.go:195] Run: systemctl --version
I0916 19:20:22.474413  617134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-612705
I0916 19:20:22.501748  617134 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/19649-567461/.minikube/machines/functional-612705/id_rsa Username:docker}
I0916 19:20:22.604138  617134 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-612705 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| docker.io/library/nginx                     | latest            | 195245f0c7927 | 193MB  |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| localhost/my-image                          | functional-612705 | 0bbcf3005ad0e | 1.41MB |
| docker.io/library/minikube-local-cache-test | functional-612705 | e5ee20cb8e646 | 30B    |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
| docker.io/kicbase/echo-server               | functional-612705 | ce2d2cda2d858 | 4.78MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-612705 image ls --format table --alsologtostderr:
I0916 19:20:26.876088  617457 out.go:345] Setting OutFile to fd 1 ...
I0916 19:20:26.876207  617457 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 19:20:26.876213  617457 out.go:358] Setting ErrFile to fd 2...
I0916 19:20:26.876217  617457 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 19:20:26.876480  617457 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-567461/.minikube/bin
I0916 19:20:26.877122  617457 config.go:182] Loaded profile config "functional-612705": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 19:20:26.877241  617457 config.go:182] Loaded profile config "functional-612705": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 19:20:26.877712  617457 cli_runner.go:164] Run: docker container inspect functional-612705 --format={{.State.Status}}
I0916 19:20:26.897563  617457 ssh_runner.go:195] Run: systemctl --version
I0916 19:20:26.897625  617457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-612705
I0916 19:20:26.917354  617457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/19649-567461/.minikube/machines/functional-612705/id_rsa Username:docker}
I0916 19:20:27.015598  617457 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-612705 image ls --format json --alsologtostderr:
[{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-612705"],"size":"4780000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"e5ee20cb8e64697d23c63e15f85c1bcbfc9b65e9debef78fc049fffed4ec7053","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-612705"],"size":"30"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"0bbcf3005ad0ec2df707f5cef23ff043ff2d1fc5aa01eeeabc3ae3dd658ed4a0","repoDigests":[],"repoTags":["localhost/my-image:functional-612705"],"size":"1410000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-612705 image ls --format json --alsologtostderr:
I0916 19:20:26.599861  617422 out.go:345] Setting OutFile to fd 1 ...
I0916 19:20:26.600039  617422 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 19:20:26.600046  617422 out.go:358] Setting ErrFile to fd 2...
I0916 19:20:26.600051  617422 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 19:20:26.600716  617422 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-567461/.minikube/bin
I0916 19:20:26.601861  617422 config.go:182] Loaded profile config "functional-612705": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 19:20:26.601998  617422 config.go:182] Loaded profile config "functional-612705": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 19:20:26.602733  617422 cli_runner.go:164] Run: docker container inspect functional-612705 --format={{.State.Status}}
I0916 19:20:26.625026  617422 ssh_runner.go:195] Run: systemctl --version
I0916 19:20:26.625087  617422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-612705
I0916 19:20:26.653398  617422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/19649-567461/.minikube/machines/functional-612705/id_rsa Username:docker}
I0916 19:20:26.756136  617422 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
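The JSON format above is the easiest of the `image ls` outputs to post-process: each entry carries `id`, `repoDigests`, `repoTags`, and a `size` string in bytes. A small sketch that loads such output and picks the largest image (entries abbreviated from the listing above):

```python
import json

# Abbreviated entries in the shape printed by `image ls --format json`.
images_json = """[
 {"id": "195245f0c7927", "repoDigests": [], "repoTags": ["docker.io/library/nginx:latest"], "size": "193000000"},
 {"id": "afb61768ce381", "repoDigests": [], "repoTags": ["registry.k8s.io/pause:3.10"], "size": "514000"},
 {"id": "e5ee20cb8e646", "repoDigests": [], "repoTags": ["docker.io/library/minikube-local-cache-test:functional-612705"], "size": "30"}
]"""

images = json.loads(images_json)
# "size" is a decimal string, so it must be cast before comparing.
largest = max(images, key=lambda img: int(img["size"]))
print(largest["repoTags"][0])  # docker.io/library/nginx:latest
```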
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-612705 image ls --format yaml --alsologtostderr:
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: e5ee20cb8e64697d23c63e15f85c1bcbfc9b65e9debef78fc049fffed4ec7053
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-612705
size: "30"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-612705
size: "4780000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-612705 image ls --format yaml --alsologtostderr:
I0916 19:20:22.698923  617167 out.go:345] Setting OutFile to fd 1 ...
I0916 19:20:22.699048  617167 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 19:20:22.699059  617167 out.go:358] Setting ErrFile to fd 2...
I0916 19:20:22.699066  617167 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 19:20:22.699687  617167 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-567461/.minikube/bin
I0916 19:20:22.700497  617167 config.go:182] Loaded profile config "functional-612705": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 19:20:22.700695  617167 config.go:182] Loaded profile config "functional-612705": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 19:20:22.703955  617167 cli_runner.go:164] Run: docker container inspect functional-612705 --format={{.State.Status}}
I0916 19:20:22.725700  617167 ssh_runner.go:195] Run: systemctl --version
I0916 19:20:22.725768  617167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-612705
I0916 19:20:22.745637  617167 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/19649-567461/.minikube/machines/functional-612705/id_rsa Username:docker}
I0916 19:20:22.844358  617167 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-612705 ssh pgrep buildkitd: exit status 1 (304.522351ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 image build -t localhost/my-image:functional-612705 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-612705 image build -t localhost/my-image:functional-612705 testdata/build --alsologtostderr: (3.114185704s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-612705 image build -t localhost/my-image:functional-612705 testdata/build --alsologtostderr:
I0916 19:20:23.237065  617257 out.go:345] Setting OutFile to fd 1 ...
I0916 19:20:23.237683  617257 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 19:20:23.237699  617257 out.go:358] Setting ErrFile to fd 2...
I0916 19:20:23.237708  617257 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 19:20:23.238017  617257 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-567461/.minikube/bin
I0916 19:20:23.238767  617257 config.go:182] Loaded profile config "functional-612705": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 19:20:23.240058  617257 config.go:182] Loaded profile config "functional-612705": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 19:20:23.240722  617257 cli_runner.go:164] Run: docker container inspect functional-612705 --format={{.State.Status}}
I0916 19:20:23.259951  617257 ssh_runner.go:195] Run: systemctl --version
I0916 19:20:23.260018  617257 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-612705
I0916 19:20:23.277720  617257 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/19649-567461/.minikube/machines/functional-612705/id_rsa Username:docker}
I0916 19:20:23.376154  617257 build_images.go:161] Building image from path: /tmp/build.3881381998.tar
I0916 19:20:23.376271  617257 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0916 19:20:23.388798  617257 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3881381998.tar
I0916 19:20:23.394005  617257 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3881381998.tar: stat -c "%s %y" /var/lib/minikube/build/build.3881381998.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3881381998.tar': No such file or directory
I0916 19:20:23.394096  617257 ssh_runner.go:362] scp /tmp/build.3881381998.tar --> /var/lib/minikube/build/build.3881381998.tar (3072 bytes)
I0916 19:20:23.452288  617257 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3881381998
I0916 19:20:23.464667  617257 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3881381998 -xf /var/lib/minikube/build/build.3881381998.tar
I0916 19:20:23.477580  617257 docker.go:360] Building image: /var/lib/minikube/build/build.3881381998
I0916 19:20:23.477734  617257 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-612705 /var/lib/minikube/build/build.3881381998
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.1s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.6s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:0bbcf3005ad0ec2df707f5cef23ff043ff2d1fc5aa01eeeabc3ae3dd658ed4a0 done
#8 naming to localhost/my-image:functional-612705 done
#8 DONE 0.0s
I0916 19:20:26.266132  617257 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-612705 /var/lib/minikube/build/build.3881381998: (2.788356745s)
I0916 19:20:26.266207  617257 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3881381998
I0916 19:20:26.276873  617257 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3881381998.tar
I0916 19:20:26.288269  617257 build_images.go:217] Built localhost/my-image:functional-612705 from /tmp/build.3881381998.tar
I0916 19:20:26.288303  617257 build_images.go:133] succeeded building to: functional-612705
I0916 19:20:26.288309  617257 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.66s)

TestFunctional/parallel/ImageCommands/Setup (0.79s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-612705
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.79s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 image load --daemon kicbase/echo-server:functional-612705 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.16s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 image load --daemon kicbase/echo-server:functional-612705 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-612705
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 image load --daemon kicbase/echo-server:functional-612705 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.12s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 image save kicbase/echo-server:functional-612705 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 image rm kicbase/echo-server:functional-612705 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.66s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-612705
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 image save --daemon kicbase/echo-server:functional-612705 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-612705
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)

TestFunctional/parallel/DockerEnv/bash (1.08s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-612705 docker-env) && out/minikube-linux-arm64 status -p functional-612705"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-612705 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.08s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 update-context --alsologtostderr -v=2
2024/09/16 19:20:28 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.26s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.26s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-612705 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.66.167 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-612705 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/MountCmd/any-port (8.41s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-612705 /tmp/TestFunctionalparallelMountCmdany-port1589862624/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726514386233988858" to /tmp/TestFunctionalparallelMountCmdany-port1589862624/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726514386233988858" to /tmp/TestFunctionalparallelMountCmdany-port1589862624/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726514386233988858" to /tmp/TestFunctionalparallelMountCmdany-port1589862624/001/test-1726514386233988858
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-612705 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (431.980873ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 16 19:19 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 16 19:19 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 16 19:19 test-1726514386233988858
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 ssh cat /mount-9p/test-1726514386233988858
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-612705 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [7aa46b5a-d26f-4de1-a4a8-76291cc8cac1] Pending
helpers_test.go:344: "busybox-mount" [7aa46b5a-d26f-4de1-a4a8-76291cc8cac1] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E0916 19:19:50.677709  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:19:50.684711  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:19:50.696085  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:19:50.717451  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:19:50.758966  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:19:50.840426  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:19:51.011924  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:19:51.333469  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox-mount" [7aa46b5a-d26f-4de1-a4a8-76291cc8cac1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
E0916 19:19:51.975462  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox-mount" [7aa46b5a-d26f-4de1-a4a8-76291cc8cac1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003872692s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-612705 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-612705 /tmp/TestFunctionalparallelMountCmdany-port1589862624/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.41s)

TestFunctional/parallel/MountCmd/specific-port (2.67s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-612705 /tmp/TestFunctionalparallelMountCmdspecific-port3240207438/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-612705 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (607.575867ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
E0916 19:19:55.819192  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-612705 /tmp/TestFunctionalparallelMountCmdspecific-port3240207438/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-612705 ssh "sudo umount -f /mount-9p": exit status 1 (403.260799ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-612705 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-612705 /tmp/TestFunctionalparallelMountCmdspecific-port3240207438/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.67s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.79s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-612705 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2108673224/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-612705 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2108673224/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-612705 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2108673224/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-612705 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-612705 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2108673224/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-612705 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2108673224/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-612705 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2108673224/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.79s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-612705 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-612705 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-7l8m5" [75b0fd6f-5c9d-4b5e-add8-226a3c512909] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
E0916 19:20:11.182915  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "hello-node-64b4f8f9ff-7l8m5" [75b0fd6f-5c9d-4b5e-add8-226a3c512909] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.004806502s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.23s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "336.950821ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "56.701438ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "333.673145ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "60.572349ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

TestFunctional/parallel/ServiceCmd/List (0.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.59s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 service list -o json
functional_test.go:1494: Took "606.564215ms" to run "out/minikube-linux-arm64 -p functional-612705 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.65s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30840
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.65s)

TestFunctional/parallel/ServiceCmd/Format (0.62s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.62s)

TestFunctional/parallel/ServiceCmd/URL (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-612705 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30840
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.51s)

TestFunctional/delete_echo-server_images (0.05s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-612705
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-612705
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-612705
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (132.38s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-390856 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0916 19:20:31.665004  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:21:12.628414  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:22:34.550296  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-390856 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m11.509144826s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (132.38s)

TestMultiControlPlane/serial/DeployApp (54.8s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-390856 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-390856 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-390856 -- rollout status deployment/busybox: (4.991236154s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-390856 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-390856 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-390856 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-390856 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-390856 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-390856 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-390856 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-390856 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-390856 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-390856 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-390856 -- exec busybox-7dff88458-b86mx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-390856 -- exec busybox-7dff88458-q4wrh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-390856 -- exec busybox-7dff88458-qm2ts -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-390856 -- exec busybox-7dff88458-b86mx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-390856 -- exec busybox-7dff88458-q4wrh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-390856 -- exec busybox-7dff88458-qm2ts -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-390856 -- exec busybox-7dff88458-b86mx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-390856 -- exec busybox-7dff88458-q4wrh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-390856 -- exec busybox-7dff88458-qm2ts -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (54.80s)

TestMultiControlPlane/serial/PingHostFromPods (1.83s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-390856 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-390856 -- exec busybox-7dff88458-b86mx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-390856 -- exec busybox-7dff88458-b86mx -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-390856 -- exec busybox-7dff88458-q4wrh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-390856 -- exec busybox-7dff88458-q4wrh -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-390856 -- exec busybox-7dff88458-qm2ts -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-390856 -- exec busybox-7dff88458-qm2ts -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.83s)

TestMultiControlPlane/serial/AddWorkerNode (28.74s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-390856 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-390856 -v=7 --alsologtostderr: (27.391334651s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-390856 status -v=7 --alsologtostderr: (1.348554706s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (28.74s)

TestMultiControlPlane/serial/NodeLabels (0.14s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-390856 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.14s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.9s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.90s)

TestMultiControlPlane/serial/CopyFile (20.97s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-390856 status --output json -v=7 --alsologtostderr: (1.081146603s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 cp testdata/cp-test.txt ha-390856:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 ssh -n ha-390856 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 cp ha-390856:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile688115044/001/cp-test_ha-390856.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 ssh -n ha-390856 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 cp ha-390856:/home/docker/cp-test.txt ha-390856-m02:/home/docker/cp-test_ha-390856_ha-390856-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 ssh -n ha-390856 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 ssh -n ha-390856-m02 "sudo cat /home/docker/cp-test_ha-390856_ha-390856-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 cp ha-390856:/home/docker/cp-test.txt ha-390856-m03:/home/docker/cp-test_ha-390856_ha-390856-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 ssh -n ha-390856 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 ssh -n ha-390856-m03 "sudo cat /home/docker/cp-test_ha-390856_ha-390856-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 cp ha-390856:/home/docker/cp-test.txt ha-390856-m04:/home/docker/cp-test_ha-390856_ha-390856-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 ssh -n ha-390856 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 ssh -n ha-390856-m04 "sudo cat /home/docker/cp-test_ha-390856_ha-390856-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 cp testdata/cp-test.txt ha-390856-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 ssh -n ha-390856-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 cp ha-390856-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile688115044/001/cp-test_ha-390856-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 ssh -n ha-390856-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 cp ha-390856-m02:/home/docker/cp-test.txt ha-390856:/home/docker/cp-test_ha-390856-m02_ha-390856.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 ssh -n ha-390856-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 ssh -n ha-390856 "sudo cat /home/docker/cp-test_ha-390856-m02_ha-390856.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 cp ha-390856-m02:/home/docker/cp-test.txt ha-390856-m03:/home/docker/cp-test_ha-390856-m02_ha-390856-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 ssh -n ha-390856-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 ssh -n ha-390856-m03 "sudo cat /home/docker/cp-test_ha-390856-m02_ha-390856-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 cp ha-390856-m02:/home/docker/cp-test.txt ha-390856-m04:/home/docker/cp-test_ha-390856-m02_ha-390856-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 ssh -n ha-390856-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 ssh -n ha-390856-m04 "sudo cat /home/docker/cp-test_ha-390856-m02_ha-390856-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 cp testdata/cp-test.txt ha-390856-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 ssh -n ha-390856-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 cp ha-390856-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile688115044/001/cp-test_ha-390856-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 ssh -n ha-390856-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 cp ha-390856-m03:/home/docker/cp-test.txt ha-390856:/home/docker/cp-test_ha-390856-m03_ha-390856.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 ssh -n ha-390856-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 ssh -n ha-390856 "sudo cat /home/docker/cp-test_ha-390856-m03_ha-390856.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 cp ha-390856-m03:/home/docker/cp-test.txt ha-390856-m02:/home/docker/cp-test_ha-390856-m03_ha-390856-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 ssh -n ha-390856-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 ssh -n ha-390856-m02 "sudo cat /home/docker/cp-test_ha-390856-m03_ha-390856-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 cp ha-390856-m03:/home/docker/cp-test.txt ha-390856-m04:/home/docker/cp-test_ha-390856-m03_ha-390856-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 ssh -n ha-390856-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 ssh -n ha-390856-m04 "sudo cat /home/docker/cp-test_ha-390856-m03_ha-390856-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 cp testdata/cp-test.txt ha-390856-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 ssh -n ha-390856-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 cp ha-390856-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile688115044/001/cp-test_ha-390856-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 ssh -n ha-390856-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 cp ha-390856-m04:/home/docker/cp-test.txt ha-390856:/home/docker/cp-test_ha-390856-m04_ha-390856.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 ssh -n ha-390856-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 ssh -n ha-390856 "sudo cat /home/docker/cp-test_ha-390856-m04_ha-390856.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 cp ha-390856-m04:/home/docker/cp-test.txt ha-390856-m02:/home/docker/cp-test_ha-390856-m04_ha-390856-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 ssh -n ha-390856-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 ssh -n ha-390856-m02 "sudo cat /home/docker/cp-test_ha-390856-m04_ha-390856-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 cp ha-390856-m04:/home/docker/cp-test.txt ha-390856-m03:/home/docker/cp-test_ha-390856-m04_ha-390856-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 ssh -n ha-390856-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 ssh -n ha-390856-m03 "sudo cat /home/docker/cp-test_ha-390856-m04_ha-390856-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.97s)

TestMultiControlPlane/serial/StopSecondaryNode (11.88s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 node stop m02 -v=7 --alsologtostderr
E0916 19:24:35.139023  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/functional-612705/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:24:35.146163  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/functional-612705/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:24:35.157610  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/functional-612705/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:24:35.179586  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/functional-612705/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:24:35.221189  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/functional-612705/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:24:35.302685  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/functional-612705/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:24:35.464292  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/functional-612705/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:24:35.786157  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/functional-612705/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:24:36.428081  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/functional-612705/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:24:37.709534  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/functional-612705/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:24:40.271212  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/functional-612705/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-390856 node stop m02 -v=7 --alsologtostderr: (11.072261037s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-390856 status -v=7 --alsologtostderr: exit status 7 (808.287996ms)

-- stdout --
	ha-390856
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-390856-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-390856-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-390856-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0916 19:24:42.520860  640381 out.go:345] Setting OutFile to fd 1 ...
	I0916 19:24:42.521009  640381 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:24:42.521022  640381 out.go:358] Setting ErrFile to fd 2...
	I0916 19:24:42.521030  640381 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:24:42.521324  640381 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-567461/.minikube/bin
	I0916 19:24:42.521570  640381 out.go:352] Setting JSON to false
	I0916 19:24:42.521631  640381 mustload.go:65] Loading cluster: ha-390856
	I0916 19:24:42.521793  640381 notify.go:220] Checking for updates...
	I0916 19:24:42.522194  640381 config.go:182] Loaded profile config "ha-390856": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 19:24:42.522217  640381 status.go:255] checking status of ha-390856 ...
	I0916 19:24:42.523014  640381 cli_runner.go:164] Run: docker container inspect ha-390856 --format={{.State.Status}}
	I0916 19:24:42.548695  640381 status.go:330] ha-390856 host status = "Running" (err=<nil>)
	I0916 19:24:42.548722  640381 host.go:66] Checking if "ha-390856" exists ...
	I0916 19:24:42.549049  640381 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-390856
	I0916 19:24:42.594132  640381 host.go:66] Checking if "ha-390856" exists ...
	I0916 19:24:42.594461  640381 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 19:24:42.594507  640381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-390856
	I0916 19:24:42.615105  640381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33514 SSHKeyPath:/home/jenkins/minikube-integration/19649-567461/.minikube/machines/ha-390856/id_rsa Username:docker}
	I0916 19:24:42.712369  640381 ssh_runner.go:195] Run: systemctl --version
	I0916 19:24:42.717478  640381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 19:24:42.729945  640381 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 19:24:42.803234  640381 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-16 19:24:42.791426008 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0916 19:24:42.803853  640381 kubeconfig.go:125] found "ha-390856" server: "https://192.168.49.254:8443"
	I0916 19:24:42.803891  640381 api_server.go:166] Checking apiserver status ...
	I0916 19:24:42.803938  640381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 19:24:42.817485  640381 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2269/cgroup
	I0916 19:24:42.830730  640381 api_server.go:182] apiserver freezer: "2:freezer:/docker/1e73f8c906f3abae5f2d5680a9fe45b932291d1ee33976061f1b1d68ad4af50e/kubepods/burstable/podc96dca49c055a8568a75abf210f4b9a6/388a8a763cdbaa4be938fd3ad76d63bb47c54f37191ba734c1a7cf39bb5f14b6"
	I0916 19:24:42.830910  640381 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1e73f8c906f3abae5f2d5680a9fe45b932291d1ee33976061f1b1d68ad4af50e/kubepods/burstable/podc96dca49c055a8568a75abf210f4b9a6/388a8a763cdbaa4be938fd3ad76d63bb47c54f37191ba734c1a7cf39bb5f14b6/freezer.state
	I0916 19:24:42.840701  640381 api_server.go:204] freezer state: "THAWED"
	I0916 19:24:42.840732  640381 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0916 19:24:42.850804  640381 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0916 19:24:42.850994  640381 status.go:422] ha-390856 apiserver status = Running (err=<nil>)
	I0916 19:24:42.851007  640381 status.go:257] ha-390856 status: &{Name:ha-390856 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 19:24:42.851069  640381 status.go:255] checking status of ha-390856-m02 ...
	I0916 19:24:42.851479  640381 cli_runner.go:164] Run: docker container inspect ha-390856-m02 --format={{.State.Status}}
	I0916 19:24:42.871627  640381 status.go:330] ha-390856-m02 host status = "Stopped" (err=<nil>)
	I0916 19:24:42.871653  640381 status.go:343] host is not running, skipping remaining checks
	I0916 19:24:42.871661  640381 status.go:257] ha-390856-m02 status: &{Name:ha-390856-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 19:24:42.871689  640381 status.go:255] checking status of ha-390856-m03 ...
	I0916 19:24:42.872029  640381 cli_runner.go:164] Run: docker container inspect ha-390856-m03 --format={{.State.Status}}
	I0916 19:24:42.890052  640381 status.go:330] ha-390856-m03 host status = "Running" (err=<nil>)
	I0916 19:24:42.890081  640381 host.go:66] Checking if "ha-390856-m03" exists ...
	I0916 19:24:42.890507  640381 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-390856-m03
	I0916 19:24:42.907530  640381 host.go:66] Checking if "ha-390856-m03" exists ...
	I0916 19:24:42.907872  640381 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 19:24:42.907925  640381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-390856-m03
	I0916 19:24:42.925704  640381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/19649-567461/.minikube/machines/ha-390856-m03/id_rsa Username:docker}
	I0916 19:24:43.024562  640381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 19:24:43.037485  640381 kubeconfig.go:125] found "ha-390856" server: "https://192.168.49.254:8443"
	I0916 19:24:43.037518  640381 api_server.go:166] Checking apiserver status ...
	I0916 19:24:43.037560  640381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 19:24:43.049873  640381 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2198/cgroup
	I0916 19:24:43.059909  640381 api_server.go:182] apiserver freezer: "2:freezer:/docker/3064cae9c41d203d3a26aa7fd4749ce1cfcb0a458a48463b615f389903eeb70b/kubepods/burstable/pod834de0a004d9802554479a68f32d5150/5f76f2edd3333305732f81328e4b8f952848299a6aacc80414f107779692d704"
	I0916 19:24:43.059989  640381 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3064cae9c41d203d3a26aa7fd4749ce1cfcb0a458a48463b615f389903eeb70b/kubepods/burstable/pod834de0a004d9802554479a68f32d5150/5f76f2edd3333305732f81328e4b8f952848299a6aacc80414f107779692d704/freezer.state
	I0916 19:24:43.069042  640381 api_server.go:204] freezer state: "THAWED"
	I0916 19:24:43.069075  640381 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0916 19:24:43.077112  640381 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0916 19:24:43.077141  640381 status.go:422] ha-390856-m03 apiserver status = Running (err=<nil>)
	I0916 19:24:43.077151  640381 status.go:257] ha-390856-m03 status: &{Name:ha-390856-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 19:24:43.077178  640381 status.go:255] checking status of ha-390856-m04 ...
	I0916 19:24:43.077488  640381 cli_runner.go:164] Run: docker container inspect ha-390856-m04 --format={{.State.Status}}
	I0916 19:24:43.095547  640381 status.go:330] ha-390856-m04 host status = "Running" (err=<nil>)
	I0916 19:24:43.095584  640381 host.go:66] Checking if "ha-390856-m04" exists ...
	I0916 19:24:43.096019  640381 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-390856-m04
	I0916 19:24:43.118736  640381 host.go:66] Checking if "ha-390856-m04" exists ...
	I0916 19:24:43.120632  640381 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 19:24:43.120684  640381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-390856-m04
	I0916 19:24:43.146600  640381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33529 SSHKeyPath:/home/jenkins/minikube-integration/19649-567461/.minikube/machines/ha-390856-m04/id_rsa Username:docker}
	I0916 19:24:43.244436  640381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 19:24:43.256794  640381 status.go:257] ha-390856-m04 status: &{Name:ha-390856-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.88s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.63s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.63s)

TestMultiControlPlane/serial/RestartSecondaryNode (72.84s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 node start m02 -v=7 --alsologtostderr
E0916 19:24:45.392496  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/functional-612705/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:24:50.676825  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:24:55.634707  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/functional-612705/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:25:16.116290  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/functional-612705/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:25:18.391690  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-390856 node start m02 -v=7 --alsologtostderr: (1m11.366250408s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-390856 status -v=7 --alsologtostderr: (1.348744738s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (72.84s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.46s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0916 19:25:57.078512  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/functional-612705/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (4.462368075s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.46s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (251.51s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-390856 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-390856 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-390856 -v=7 --alsologtostderr: (34.585360457s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-390856 --wait=true -v=7 --alsologtostderr
E0916 19:27:19.000470  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/functional-612705/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:29:35.138169  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/functional-612705/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:29:50.676998  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:30:02.842113  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/functional-612705/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-390856 --wait=true -v=7 --alsologtostderr: (3m36.759144676s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-390856
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (251.51s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.39s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-390856 node delete m03 -v=7 --alsologtostderr: (10.394444887s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.39s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

TestMultiControlPlane/serial/StopCluster (33.2s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-390856 stop -v=7 --alsologtostderr: (33.08611946s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-390856 status -v=7 --alsologtostderr: exit status 7 (110.841874ms)

-- stdout --
	ha-390856
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-390856-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-390856-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0916 19:30:57.783306  667999 out.go:345] Setting OutFile to fd 1 ...
	I0916 19:30:57.783444  667999 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:30:57.783455  667999 out.go:358] Setting ErrFile to fd 2...
	I0916 19:30:57.783461  667999 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:30:57.783725  667999 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-567461/.minikube/bin
	I0916 19:30:57.783963  667999 out.go:352] Setting JSON to false
	I0916 19:30:57.783999  667999 mustload.go:65] Loading cluster: ha-390856
	I0916 19:30:57.784094  667999 notify.go:220] Checking for updates...
	I0916 19:30:57.784447  667999 config.go:182] Loaded profile config "ha-390856": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 19:30:57.784461  667999 status.go:255] checking status of ha-390856 ...
	I0916 19:30:57.785071  667999 cli_runner.go:164] Run: docker container inspect ha-390856 --format={{.State.Status}}
	I0916 19:30:57.804461  667999 status.go:330] ha-390856 host status = "Stopped" (err=<nil>)
	I0916 19:30:57.804485  667999 status.go:343] host is not running, skipping remaining checks
	I0916 19:30:57.804492  667999 status.go:257] ha-390856 status: &{Name:ha-390856 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 19:30:57.804538  667999 status.go:255] checking status of ha-390856-m02 ...
	I0916 19:30:57.804862  667999 cli_runner.go:164] Run: docker container inspect ha-390856-m02 --format={{.State.Status}}
	I0916 19:30:57.823294  667999 status.go:330] ha-390856-m02 host status = "Stopped" (err=<nil>)
	I0916 19:30:57.823314  667999 status.go:343] host is not running, skipping remaining checks
	I0916 19:30:57.823321  667999 status.go:257] ha-390856-m02 status: &{Name:ha-390856-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 19:30:57.823340  667999 status.go:255] checking status of ha-390856-m04 ...
	I0916 19:30:57.823660  667999 cli_runner.go:164] Run: docker container inspect ha-390856-m04 --format={{.State.Status}}
	I0916 19:30:57.846260  667999 status.go:330] ha-390856-m04 host status = "Stopped" (err=<nil>)
	I0916 19:30:57.846286  667999 status.go:343] host is not running, skipping remaining checks
	I0916 19:30:57.846294  667999 status.go:257] ha-390856-m04 status: &{Name:ha-390856-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (33.20s)

TestMultiControlPlane/serial/RestartCluster (101.22s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-390856 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-390856 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m40.200059744s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (101.22s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.57s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.57s)

TestMultiControlPlane/serial/AddSecondaryNode (44.93s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-390856 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-390856 --control-plane -v=7 --alsologtostderr: (43.835102619s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-390856 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-390856 status -v=7 --alsologtostderr: (1.094770877s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (44.93s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)

TestImageBuild/serial/Setup (32.02s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-998712 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-998712 --driver=docker  --container-runtime=docker: (32.024819104s)
--- PASS: TestImageBuild/serial/Setup (32.02s)

TestImageBuild/serial/NormalBuild (1.91s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-998712
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-998712: (1.910451174s)
--- PASS: TestImageBuild/serial/NormalBuild (1.91s)

TestImageBuild/serial/BuildWithBuildArg (1.19s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-998712
image_test.go:99: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-998712: (1.18907938s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.19s)

TestImageBuild/serial/BuildWithDockerIgnore (0.8s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-998712
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.80s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.88s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-998712
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.88s)

TestJSONOutput/start/Command (44.91s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-909662 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
E0916 19:34:35.138991  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/functional-612705/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:34:50.677276  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-909662 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (44.903087686s)
--- PASS: TestJSONOutput/start/Command (44.91s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.65s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-909662 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.65s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.57s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-909662 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.57s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.83s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-909662 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-909662 --output=json --user=testUser: (5.830899237s)
--- PASS: TestJSONOutput/stop/Command (5.83s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-477437 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-477437 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (86.473204ms)
-- stdout --
	{"specversion":"1.0","id":"ed882280-7eb5-4ea3-ae38-80d0ba6e81d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-477437] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c4e2fc3f-b99f-4f9f-9f49-b24d85d3c34c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19649"}}
	{"specversion":"1.0","id":"208077fa-deed-48f1-8bbc-49029690d29e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0dc6ae41-16cd-4817-9ebb-1c4bc562f095","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19649-567461/kubeconfig"}}
	{"specversion":"1.0","id":"92088173-6e35-4ddc-9b30-491d5fce8482","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-567461/.minikube"}}
	{"specversion":"1.0","id":"595755f4-19c3-4ac6-a79b-26c8ed828684","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"167fd919-3182-48ac-8fbb-92e9e9b20a71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2cf5b229-3bc7-48a5-9036-8d509de45566","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-477437" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-477437
--- PASS: TestErrorJSONOutput (0.23s)

TestKicCustomNetwork/create_custom_network (34.25s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-402991 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-402991 --network=: (32.080134915s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-402991" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-402991
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-402991: (2.148453957s)
--- PASS: TestKicCustomNetwork/create_custom_network (34.25s)

TestKicCustomNetwork/use_default_bridge_network (37.73s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-902943 --network=bridge
E0916 19:36:13.755001  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-902943 --network=bridge: (35.734669156s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-902943" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-902943
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-902943: (1.966594756s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (37.73s)

TestKicExistingNetwork (32.83s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-120214 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-120214 --network=existing-network: (30.711451315s)
helpers_test.go:175: Cleaning up "existing-network-120214" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-120214
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-120214: (1.951516692s)
--- PASS: TestKicExistingNetwork (32.83s)

TestKicCustomSubnet (36.51s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-128401 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-128401 --subnet=192.168.60.0/24: (34.317860536s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-128401 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-128401" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-128401
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-128401: (2.160906619s)
--- PASS: TestKicCustomSubnet (36.51s)

TestKicStaticIP (36.53s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-209977 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-209977 --static-ip=192.168.200.200: (34.612490381s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-209977 ip
helpers_test.go:175: Cleaning up "static-ip-209977" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-209977
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-209977: (1.762073918s)
--- PASS: TestKicStaticIP (36.53s)

TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (70.69s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-138988 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-138988 --driver=docker  --container-runtime=docker: (32.428421718s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-142073 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-142073 --driver=docker  --container-runtime=docker: (32.541670145s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-138988
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-142073
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-142073" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-142073
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-142073: (2.160894336s)
helpers_test.go:175: Cleaning up "first-138988" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-138988
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-138988: (2.162569883s)
--- PASS: TestMinikubeProfile (70.69s)

TestMountStart/serial/StartWithMountFirst (8.2s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-182409 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-182409 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.197681262s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.20s)

TestMountStart/serial/VerifyMountFirst (0.27s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-182409 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (9.25s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-184706 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-184706 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (8.248459954s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.25s)

TestMountStart/serial/VerifyMountSecond (0.28s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-184706 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.5s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-182409 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-182409 --alsologtostderr -v=5: (1.497994021s)
--- PASS: TestMountStart/serial/DeleteFirst (1.50s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-184706 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.25s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-184706
E0916 19:39:35.138664  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/functional-612705/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-184706: (1.246186s)
--- PASS: TestMountStart/serial/Stop (1.25s)

TestMountStart/serial/RestartStopped (8.85s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-184706
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-184706: (7.850456033s)
--- PASS: TestMountStart/serial/RestartStopped (8.85s)

TestMountStart/serial/VerifyMountPostStop (0.49s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-184706 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.49s)

TestMultiNode/serial/FreshStart2Nodes (71.37s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-991240 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0916 19:39:50.677365  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-991240 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m10.722061232s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 status --alsologtostderr
E0916 19:40:58.204062  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/functional-612705/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiNode/serial/FreshStart2Nodes (71.37s)

TestMultiNode/serial/DeployApp2Nodes (36.85s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-991240 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-991240 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-991240 -- rollout status deployment/busybox: (4.107946229s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-991240 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-991240 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-991240 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-991240 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-991240 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-991240 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-991240 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-991240 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-991240 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-991240 -- exec busybox-7dff88458-lcg9h -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-991240 -- exec busybox-7dff88458-qdk4w -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-991240 -- exec busybox-7dff88458-lcg9h -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-991240 -- exec busybox-7dff88458-qdk4w -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-991240 -- exec busybox-7dff88458-lcg9h -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-991240 -- exec busybox-7dff88458-qdk4w -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (36.85s)

TestMultiNode/serial/PingHostFrom2Pods (1.07s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-991240 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-991240 -- exec busybox-7dff88458-lcg9h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-991240 -- exec busybox-7dff88458-lcg9h -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-991240 -- exec busybox-7dff88458-qdk4w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-991240 -- exec busybox-7dff88458-qdk4w -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.07s)

TestMultiNode/serial/AddNode (20.73s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-991240 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-991240 -v 3 --alsologtostderr: (19.817584707s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (20.73s)

TestMultiNode/serial/MultiNodeLabels (0.12s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-991240 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.12s)

TestMultiNode/serial/ProfileList (0.43s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.43s)

TestMultiNode/serial/CopyFile (11.03s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 cp testdata/cp-test.txt multinode-991240:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 ssh -n multinode-991240 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 cp multinode-991240:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile918455609/001/cp-test_multinode-991240.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 ssh -n multinode-991240 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 cp multinode-991240:/home/docker/cp-test.txt multinode-991240-m02:/home/docker/cp-test_multinode-991240_multinode-991240-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 ssh -n multinode-991240 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 ssh -n multinode-991240-m02 "sudo cat /home/docker/cp-test_multinode-991240_multinode-991240-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 cp multinode-991240:/home/docker/cp-test.txt multinode-991240-m03:/home/docker/cp-test_multinode-991240_multinode-991240-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 ssh -n multinode-991240 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 ssh -n multinode-991240-m03 "sudo cat /home/docker/cp-test_multinode-991240_multinode-991240-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 cp testdata/cp-test.txt multinode-991240-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 ssh -n multinode-991240-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 cp multinode-991240-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile918455609/001/cp-test_multinode-991240-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 ssh -n multinode-991240-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 cp multinode-991240-m02:/home/docker/cp-test.txt multinode-991240:/home/docker/cp-test_multinode-991240-m02_multinode-991240.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 ssh -n multinode-991240-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 ssh -n multinode-991240 "sudo cat /home/docker/cp-test_multinode-991240-m02_multinode-991240.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 cp multinode-991240-m02:/home/docker/cp-test.txt multinode-991240-m03:/home/docker/cp-test_multinode-991240-m02_multinode-991240-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 ssh -n multinode-991240-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 ssh -n multinode-991240-m03 "sudo cat /home/docker/cp-test_multinode-991240-m02_multinode-991240-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 cp testdata/cp-test.txt multinode-991240-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 ssh -n multinode-991240-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 cp multinode-991240-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile918455609/001/cp-test_multinode-991240-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 ssh -n multinode-991240-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 cp multinode-991240-m03:/home/docker/cp-test.txt multinode-991240:/home/docker/cp-test_multinode-991240-m03_multinode-991240.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 ssh -n multinode-991240-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 ssh -n multinode-991240 "sudo cat /home/docker/cp-test_multinode-991240-m03_multinode-991240.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 cp multinode-991240-m03:/home/docker/cp-test.txt multinode-991240-m02:/home/docker/cp-test_multinode-991240-m03_multinode-991240-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 ssh -n multinode-991240-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 ssh -n multinode-991240-m02 "sudo cat /home/docker/cp-test_multinode-991240-m03_multinode-991240-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.03s)

TestMultiNode/serial/StopNode (2.34s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-991240 node stop m03: (1.261896073s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-991240 status: exit status 7 (538.020941ms)
-- stdout --
	multinode-991240
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-991240-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-991240-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-991240 status --alsologtostderr: exit status 7 (535.545205ms)
-- stdout --
	multinode-991240
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-991240-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-991240-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0916 19:42:10.632159  742470 out.go:345] Setting OutFile to fd 1 ...
	I0916 19:42:10.632368  742470 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:42:10.632396  742470 out.go:358] Setting ErrFile to fd 2...
	I0916 19:42:10.632402  742470 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:42:10.632847  742470 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-567461/.minikube/bin
	I0916 19:42:10.633139  742470 out.go:352] Setting JSON to false
	I0916 19:42:10.633169  742470 mustload.go:65] Loading cluster: multinode-991240
	I0916 19:42:10.633812  742470 config.go:182] Loaded profile config "multinode-991240": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 19:42:10.633828  742470 status.go:255] checking status of multinode-991240 ...
	I0916 19:42:10.634559  742470 cli_runner.go:164] Run: docker container inspect multinode-991240 --format={{.State.Status}}
	I0916 19:42:10.637354  742470 notify.go:220] Checking for updates...
	I0916 19:42:10.654894  742470 status.go:330] multinode-991240 host status = "Running" (err=<nil>)
	I0916 19:42:10.654921  742470 host.go:66] Checking if "multinode-991240" exists ...
	I0916 19:42:10.655235  742470 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-991240
	I0916 19:42:10.673652  742470 host.go:66] Checking if "multinode-991240" exists ...
	I0916 19:42:10.673974  742470 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 19:42:10.674032  742470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-991240
	I0916 19:42:10.697669  742470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33639 SSHKeyPath:/home/jenkins/minikube-integration/19649-567461/.minikube/machines/multinode-991240/id_rsa Username:docker}
	I0916 19:42:10.796053  742470 ssh_runner.go:195] Run: systemctl --version
	I0916 19:42:10.800430  742470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 19:42:10.812196  742470 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 19:42:10.873689  742470 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-16 19:42:10.863749721 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0916 19:42:10.874282  742470 kubeconfig.go:125] found "multinode-991240" server: "https://192.168.67.2:8443"
	I0916 19:42:10.874322  742470 api_server.go:166] Checking apiserver status ...
	I0916 19:42:10.874377  742470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 19:42:10.886721  742470 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2249/cgroup
	I0916 19:42:10.896392  742470 api_server.go:182] apiserver freezer: "2:freezer:/docker/80aacf053c8ad55d0e4cf76382f2afbdd64ce3c1e78183ceebf4ffbd38e24a5e/kubepods/burstable/pod516ab2db34ac52c981fd42b90969aebc/1c0d7f7f55ae8d2403cd734cfb035ff922faa45f36c7bbc7c2ec512591cc9282"
	I0916 19:42:10.896464  742470 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/80aacf053c8ad55d0e4cf76382f2afbdd64ce3c1e78183ceebf4ffbd38e24a5e/kubepods/burstable/pod516ab2db34ac52c981fd42b90969aebc/1c0d7f7f55ae8d2403cd734cfb035ff922faa45f36c7bbc7c2ec512591cc9282/freezer.state
	I0916 19:42:10.905110  742470 api_server.go:204] freezer state: "THAWED"
	I0916 19:42:10.905137  742470 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0916 19:42:10.913480  742470 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0916 19:42:10.913512  742470 status.go:422] multinode-991240 apiserver status = Running (err=<nil>)
	I0916 19:42:10.913524  742470 status.go:257] multinode-991240 status: &{Name:multinode-991240 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 19:42:10.913541  742470 status.go:255] checking status of multinode-991240-m02 ...
	I0916 19:42:10.913862  742470 cli_runner.go:164] Run: docker container inspect multinode-991240-m02 --format={{.State.Status}}
	I0916 19:42:10.932908  742470 status.go:330] multinode-991240-m02 host status = "Running" (err=<nil>)
	I0916 19:42:10.932933  742470 host.go:66] Checking if "multinode-991240-m02" exists ...
	I0916 19:42:10.933250  742470 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-991240-m02
	I0916 19:42:10.950145  742470 host.go:66] Checking if "multinode-991240-m02" exists ...
	I0916 19:42:10.950457  742470 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 19:42:10.950494  742470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-991240-m02
	I0916 19:42:10.969206  742470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33644 SSHKeyPath:/home/jenkins/minikube-integration/19649-567461/.minikube/machines/multinode-991240-m02/id_rsa Username:docker}
	I0916 19:42:11.076515  742470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 19:42:11.089368  742470 status.go:257] multinode-991240-m02 status: &{Name:multinode-991240-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0916 19:42:11.089404  742470 status.go:255] checking status of multinode-991240-m03 ...
	I0916 19:42:11.089776  742470 cli_runner.go:164] Run: docker container inspect multinode-991240-m03 --format={{.State.Status}}
	I0916 19:42:11.108395  742470 status.go:330] multinode-991240-m03 host status = "Stopped" (err=<nil>)
	I0916 19:42:11.108416  742470 status.go:343] host is not running, skipping remaining checks
	I0916 19:42:11.108424  742470 status.go:257] multinode-991240-m03 status: &{Name:multinode-991240-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.34s)

TestMultiNode/serial/StartAfterStop (11.32s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-991240 node start m03 -v=7 --alsologtostderr: (10.526864541s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.32s)

TestMultiNode/serial/RestartKeepsNodes (104.27s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-991240
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-991240
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-991240: (22.713150239s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-991240 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-991240 --wait=true -v=8 --alsologtostderr: (1m21.423537823s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-991240
--- PASS: TestMultiNode/serial/RestartKeepsNodes (104.27s)

TestMultiNode/serial/DeleteNode (5.75s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-991240 node delete m03: (5.023998641s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.75s)

TestMultiNode/serial/StopMultiNode (21.66s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-991240 stop: (21.465600465s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-991240 status: exit status 7 (99.52633ms)

-- stdout --
	multinode-991240
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-991240-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-991240 status --alsologtostderr: exit status 7 (95.069698ms)

-- stdout --
	multinode-991240
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-991240-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0916 19:44:34.062336  756116 out.go:345] Setting OutFile to fd 1 ...
	I0916 19:44:34.062740  756116 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:44:34.062759  756116 out.go:358] Setting ErrFile to fd 2...
	I0916 19:44:34.062765  756116 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:44:34.063092  756116 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-567461/.minikube/bin
	I0916 19:44:34.063314  756116 out.go:352] Setting JSON to false
	I0916 19:44:34.063349  756116 mustload.go:65] Loading cluster: multinode-991240
	I0916 19:44:34.063801  756116 config.go:182] Loaded profile config "multinode-991240": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 19:44:34.063821  756116 status.go:255] checking status of multinode-991240 ...
	I0916 19:44:34.064431  756116 cli_runner.go:164] Run: docker container inspect multinode-991240 --format={{.State.Status}}
	I0916 19:44:34.064943  756116 notify.go:220] Checking for updates...
	I0916 19:44:34.083748  756116 status.go:330] multinode-991240 host status = "Stopped" (err=<nil>)
	I0916 19:44:34.083770  756116 status.go:343] host is not running, skipping remaining checks
	I0916 19:44:34.083780  756116 status.go:257] multinode-991240 status: &{Name:multinode-991240 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 19:44:34.083822  756116 status.go:255] checking status of multinode-991240-m02 ...
	I0916 19:44:34.084171  756116 cli_runner.go:164] Run: docker container inspect multinode-991240-m02 --format={{.State.Status}}
	I0916 19:44:34.102767  756116 status.go:330] multinode-991240-m02 host status = "Stopped" (err=<nil>)
	I0916 19:44:34.102789  756116 status.go:343] host is not running, skipping remaining checks
	I0916 19:44:34.102796  756116 status.go:257] multinode-991240-m02 status: &{Name:multinode-991240-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.66s)

TestMultiNode/serial/RestartMultiNode (59.72s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-991240 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0916 19:44:35.138659  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/functional-612705/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:44:50.677441  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-991240 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (58.886485232s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-991240 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (59.72s)

TestMultiNode/serial/ValidateNameConflict (36.8s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-991240
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-991240-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-991240-m02 --driver=docker  --container-runtime=docker: exit status 14 (110.327557ms)

-- stdout --
	* [multinode-991240-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19649-567461/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-567461/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-991240-m02' is duplicated with machine name 'multinode-991240-m02' in profile 'multinode-991240'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-991240-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-991240-m03 --driver=docker  --container-runtime=docker: (34.069823836s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-991240
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-991240: exit status 80 (357.167945ms)

-- stdout --
	* Adding node m03 to cluster multinode-991240 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-991240-m03 already exists in multinode-991240-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-991240-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-991240-m03: (2.178985716s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.80s)

TestPreload (145.87s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-632145 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-632145 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m46.409346973s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-632145 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-632145 image pull gcr.io/k8s-minikube/busybox: (2.005621452s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-632145
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-632145: (10.881698176s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-632145 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-632145 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (23.879938881s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-632145 image list
helpers_test.go:175: Cleaning up "test-preload-632145" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-632145
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-632145: (2.335589682s)
--- PASS: TestPreload (145.87s)

TestScheduledStopUnix (108.5s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-750229 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-750229 --memory=2048 --driver=docker  --container-runtime=docker: (35.184726286s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-750229 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-750229 -n scheduled-stop-750229
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-750229 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-750229 --cancel-scheduled
E0916 19:49:35.140771  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/functional-612705/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-750229 -n scheduled-stop-750229
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-750229
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-750229 --schedule 15s
E0916 19:49:50.676786  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-750229
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-750229: exit status 7 (77.078865ms)

-- stdout --
	scheduled-stop-750229
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-750229 -n scheduled-stop-750229
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-750229 -n scheduled-stop-750229: exit status 7 (75.360667ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-750229" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-750229
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-750229: (1.658580096s)
--- PASS: TestScheduledStopUnix (108.50s)

TestSkaffold (121.53s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe739608461 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-576460 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-576460 --memory=2600 --driver=docker  --container-runtime=docker: (31.815600233s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe739608461 run --minikube-profile skaffold-576460 --kube-context skaffold-576460 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe739608461 run --minikube-profile skaffold-576460 --kube-context skaffold-576460 --status-check=true --port-forward=false --interactive=false: (1m13.989991969s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-6d7b8674bc-gf9sf" [0e365926-ad22-4326-8e74-6c2f8e0318e8] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004205719s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-85947b89f9-jhbth" [b1e61a87-4dda-4379-bdf1-ea656bd321a2] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003755106s
helpers_test.go:175: Cleaning up "skaffold-576460" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-576460
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-576460: (3.082019335s)
--- PASS: TestSkaffold (121.53s)

TestInsufficientStorage (11.65s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-283321 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-283321 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (9.261291556s)

-- stdout --
	{"specversion":"1.0","id":"e332c0d1-3568-430a-8beb-5a6fcc4d3597","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-283321] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0124c391-cebe-4a42-b9da-cd503079a58b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19649"}}
	{"specversion":"1.0","id":"1d183623-6b43-46cb-ba73-05557f1d0c04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3583d685-ce0e-400c-899b-7bbdd94da117","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19649-567461/kubeconfig"}}
	{"specversion":"1.0","id":"03b51eef-3985-4e2e-95d6-91f32a4fb746","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-567461/.minikube"}}
	{"specversion":"1.0","id":"702e2d11-b84b-480c-9eb3-7bc1d7e79467","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"e7f5fe17-42bc-441c-8a49-196e8f3f11f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d0286b45-e5a3-4eeb-b335-4e351217dd7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"8c8b05ca-fb5f-4008-a5bf-4fed9788c832","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"2cef4740-d043-413f-9ffd-10573ccf6482","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e91c47b0-445a-48bc-a57b-9d1acaa4142f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"29e6524f-18fc-4b22-a119-0d99f002fc64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-283321\" primary control-plane node in \"insufficient-storage-283321\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"43072da2-ad96-4229-8041-73bd49b17b8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726481311-19649 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"291eb693-8c0d-4150-a1a8-a0dc5e2fdfc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"a6c34e11-77b7-425d-b53c-2fad48305a9e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-283321 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-283321 --output=json --layout=cluster: exit status 7 (358.701625ms)
-- stdout --
	{"Name":"insufficient-storage-283321","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-283321","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0916 19:52:40.073315  790601 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-283321" does not appear in /home/jenkins/minikube-integration/19649-567461/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-283321 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-283321 --output=json --layout=cluster: exit status 7 (309.615221ms)
-- stdout --
	{"Name":"insufficient-storage-283321","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-283321","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0916 19:52:40.386127  790665 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-283321" does not appear in /home/jenkins/minikube-integration/19649-567461/kubeconfig
	E0916 19:52:40.396811  790665 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/insufficient-storage-283321/events.json: no such file or directory
** /stderr **
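The `--output=json --layout=cluster` payloads captured above share a fixed shape (`Name`, `StatusCode`, `Components`, `Nodes`). A minimal sketch of checking such a payload mechanically — the JSON is copied from the second status call above; the helper function is illustrative and not part of the test suite:

```python
import json

# Cluster-status payload as captured in the log above (second status call).
payload = json.loads("""
{"Name":"insufficient-storage-283321","StatusCode":507,
 "StatusName":"InsufficientStorage",
 "StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0",
 "Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,
                             "StatusName":"Error"}},
 "Nodes":[{"Name":"insufficient-storage-283321","StatusCode":507,
           "StatusName":"InsufficientStorage",
           "Components":{"apiserver":{"Name":"apiserver","StatusCode":405,
                                      "StatusName":"Stopped"},
                         "kubelet":{"Name":"kubelet","StatusCode":405,
                                    "StatusName":"Stopped"}}}]}
""")

def degraded_components(status: dict) -> list:
    """Collect every component whose StatusCode is outside the 2xx range."""
    bad = []
    for name, comp in status.get("Components", {}).items():
        if not 200 <= comp["StatusCode"] < 300:
            bad.append(name)
    for node in status.get("Nodes", []):
        for name, comp in node.get("Components", {}).items():
            if not 200 <= comp["StatusCode"] < 300:
                bad.append(f'{node["Name"]}/{name}')
    return bad

print(degraded_components(payload))
```

With the payload above this reports the kubeconfig, apiserver, and kubelet components as degraded, matching the exit status 7 the test treats as expected.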
helpers_test.go:175: Cleaning up "insufficient-storage-283321" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-283321
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-283321: (1.723472247s)
--- PASS: TestInsufficientStorage (11.65s)

TestRunningBinaryUpgrade (132.29s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.388332715 start -p running-upgrade-631762 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.388332715 start -p running-upgrade-631762 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m20.628648964s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-631762 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0916 19:57:16.363686  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/skaffold-576460/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:57:16.370089  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/skaffold-576460/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:57:16.381397  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/skaffold-576460/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:57:16.402802  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/skaffold-576460/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:57:16.444187  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/skaffold-576460/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:57:16.525608  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/skaffold-576460/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:57:16.687104  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/skaffold-576460/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:57:17.008761  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/skaffold-576460/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:57:17.651017  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/skaffold-576460/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:57:18.932640  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/skaffold-576460/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-631762 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (48.443402715s)
helpers_test.go:175: Cleaning up "running-upgrade-631762" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-631762
E0916 19:57:21.494949  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/skaffold-576460/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-631762: (2.208182082s)
--- PASS: TestRunningBinaryUpgrade (132.29s)

TestKubernetesUpgrade (374.73s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-945418 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0916 19:58:38.302731  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/skaffold-576460/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-945418 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m2.825802253s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-945418
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-945418: (2.28514745s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-945418 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-945418 status --format={{.Host}}: exit status 7 (131.025196ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-945418 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0916 19:59:35.139106  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/functional-612705/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:59:50.676727  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:00:00.224241  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/skaffold-576460/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-945418 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m40.482469242s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-945418 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-945418 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-945418 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (121.773133ms)
-- stdout --
	* [kubernetes-upgrade-945418] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19649-567461/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-567461/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-945418
	    minikube start -p kubernetes-upgrade-945418 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9454182 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-945418 --kubernetes-version=v1.31.1
	    
** /stderr **
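The exit-106 `K8S_DOWNGRADE_UNSUPPORTED` refusal above comes from comparing the requested Kubernetes version against the cluster's current one. A minimal sketch of that kind of guard — the function names and the simple semver parsing are illustrative, not minikube's actual implementation:

```python
def parse_version(v: str) -> tuple:
    """Turn a version string like 'v1.31.1' into (1, 31, 1) for comparison."""
    return tuple(int(part) for part in v.lstrip("v").split("."))

def downgrade_requested(current: str, requested: str) -> bool:
    """True when the requested version is older than the running cluster."""
    return parse_version(requested) < parse_version(current)

# The combination the test exercises: v1.31.1 cluster, v1.20.0 requested.
print(downgrade_requested("v1.31.1", "v1.20.0"))
print(downgrade_requested("v1.31.1", "v1.31.1"))
```

Tuple comparison orders `(1, 20, 0)` before `(1, 31, 1)`, so the first call flags a downgrade and the second (same version, the restart the test performs next) does not.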
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-945418 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-945418 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (26.275281768s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-945418" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-945418
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-945418: (2.486994496s)
--- PASS: TestKubernetesUpgrade (374.73s)

TestMissingContainerUpgrade (120.42s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.125053977 start -p missing-upgrade-873826 --memory=2200 --driver=docker  --container-runtime=docker
E0916 19:57:26.616537  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/skaffold-576460/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:57:36.858616  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/skaffold-576460/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:57:38.206045  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/functional-612705/client.crt: no such file or directory" logger="UnhandledError"
E0916 19:57:57.340720  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/skaffold-576460/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.125053977 start -p missing-upgrade-873826 --memory=2200 --driver=docker  --container-runtime=docker: (37.058746654s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-873826
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-873826: (10.477762207s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-873826
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-873826 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-873826 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m9.443048545s)
helpers_test.go:175: Cleaning up "missing-upgrade-873826" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-873826
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-873826: (2.393205046s)
--- PASS: TestMissingContainerUpgrade (120.42s)

TestStoppedBinaryUpgrade/Setup (0.97s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.97s)

TestStoppedBinaryUpgrade/Upgrade (94.39s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.834026217 start -p stopped-upgrade-866355 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.834026217 start -p stopped-upgrade-866355 --memory=2200 --vm-driver=docker  --container-runtime=docker: (50.572197724s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.834026217 -p stopped-upgrade-866355 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.834026217 -p stopped-upgrade-866355 stop: (10.955887455s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-866355 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-866355 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (32.863073761s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (94.39s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.73s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-866355
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-866355: (1.724347562s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.73s)

TestPause/serial/Start (80.56s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-254656 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E0916 20:02:16.363450  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/skaffold-576460/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-254656 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m20.556033803s)
--- PASS: TestPause/serial/Start (80.56s)

TestPause/serial/SecondStartNoReconfiguration (34.48s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-254656 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0916 20:02:44.066598  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/skaffold-576460/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-254656 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (34.468520597s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (34.48s)

TestPause/serial/Pause (0.64s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-254656 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.64s)

TestPause/serial/VerifyStatus (0.34s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-254656 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-254656 --output=json --layout=cluster: exit status 2 (339.585717ms)
-- stdout --
	{"Name":"pause-254656","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-254656","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
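The status payloads in this report reuse HTTP-style codes: 200 OK, 405 Stopped, 418 Paused, 500 Error, 507 InsufficientStorage. A tiny lookup helper covering only the codes that actually appear in this log (the helper itself is illustrative):

```python
# Status codes exactly as they appear in the payloads in this report.
STATUS_NAMES = {
    200: "OK",
    405: "Stopped",
    418: "Paused",
    500: "Error",
    507: "InsufficientStorage",
}

def status_name(code: int) -> str:
    """Map a minikube cluster-status code to its name, as seen in the log."""
    return STATUS_NAMES.get(code, f"Unknown({code})")

print(status_name(418))
```

In the payload above, the paused apiserver reports 418 while the kubelet reports 405, which is why the overall `status` call exits with status 2 yet the test still passes.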
--- PASS: TestPause/serial/VerifyStatus (0.34s)

TestPause/serial/Unpause (0.55s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-254656 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.55s)

TestPause/serial/PauseAgain (1.17s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-254656 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-254656 --alsologtostderr -v=5: (1.166051888s)
--- PASS: TestPause/serial/PauseAgain (1.17s)

TestPause/serial/DeletePaused (2.26s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-254656 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-254656 --alsologtostderr -v=5: (2.25561081s)
--- PASS: TestPause/serial/DeletePaused (2.26s)

TestPause/serial/VerifyDeletedResources (14.84s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (14.786111355s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-254656
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-254656: exit status 1 (15.874213ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-254656: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (14.84s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-635766 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-635766 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (100.456146ms)
-- stdout --
	* [NoKubernetes-635766] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19649-567461/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-567461/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
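The `MK_USAGE` error above (exit status 14) is a flag-compatibility check: `--no-kubernetes` and `--kubernetes-version` cannot be combined. A toy version of that validation — the function name and signature are illustrative, not minikube's code:

```python
def validate_flags(no_kubernetes, kubernetes_version=None):
    """Reject the flag combination the test exercises above."""
    if no_kubernetes and kubernetes_version is not None:
        raise ValueError(
            "cannot specify --kubernetes-version with --no-kubernetes"
        )

# The exact invocation from the test: --no-kubernetes --kubernetes-version=1.20
try:
    validate_flags(no_kubernetes=True, kubernetes_version="1.20")
except ValueError as err:
    print(err)
```

Because the test only asserts on the non-zero exit and the error text, this quick in-process check is all the scenario needs, which is why it finishes in 0.10s.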
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (36.45s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-635766 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-635766 --driver=docker  --container-runtime=docker: (35.943660273s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-635766 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (36.45s)

TestNoKubernetes/serial/StartWithStopK8s (19.95s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-635766 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-635766 --no-kubernetes --driver=docker  --container-runtime=docker: (17.451003454s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-635766 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-635766 status -o json: exit status 2 (431.032258ms)
-- stdout --
	{"Name":"NoKubernetes-635766","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-635766
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-635766: (2.066171325s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.95s)

TestNoKubernetes/serial/Start (8.92s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-635766 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-635766 --no-kubernetes --driver=docker  --container-runtime=docker: (8.915649133s)
--- PASS: TestNoKubernetes/serial/Start (8.92s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-635766 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-635766 "sudo systemctl is-active --quiet service kubelet": exit status 1 (277.966829ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
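`systemctl is-active --quiet` exits 0 only when the unit is active; the status 3 surfaced through ssh above is what the test treats as "kubelet is not running". A sketch of that interpretation (the helper name is illustrative):

```python
def kubelet_running(ssh_exit_status: int) -> bool:
    """systemctl is-active exits 0 only when the queried unit is active."""
    return ssh_exit_status == 0

print(kubelet_running(3))  # exit status captured in the log above
```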
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

TestNoKubernetes/serial/ProfileList (17.33s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
E0916 20:04:35.138968  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/functional-612705/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-arm64 profile list: (15.223441748s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-arm64 profile list --output=json: (2.107855558s)
--- PASS: TestNoKubernetes/serial/ProfileList (17.33s)

TestNetworkPlugins/group/auto/Start (83.57s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-206945 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-206945 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m23.570934844s)
--- PASS: TestNetworkPlugins/group/auto/Start (83.57s)

TestNoKubernetes/serial/Stop (1.29s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-635766
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-635766: (1.292711879s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

TestNoKubernetes/serial/StartNoArgs (9.59s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-635766 --driver=docker  --container-runtime=docker
E0916 20:04:50.677498  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-635766 --driver=docker  --container-runtime=docker: (9.590611839s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (9.59s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-635766 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-635766 "sudo systemctl is-active --quiet service kubelet": exit status 1 (346.400939ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

TestNetworkPlugins/group/kindnet/Start (74.94s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-206945 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-206945 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m14.941421642s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (74.94s)

TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-206945 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

TestNetworkPlugins/group/auto/NetCatPod (11.37s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-206945 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6245d" [4a969af3-1c24-40f7-8722-dc1dc9d48472] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6245d" [4a969af3-1c24-40f7-8722-dc1dc9d48472] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004622791s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.37s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-htmhj" [cf688a83-7394-4c67-8721-41cf0a2e9c77] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.007285395s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-206945 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.32s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-206945 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-g7zzr" [06181509-3651-43e4-bfb5-caea7c27c56f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-g7zzr" [06181509-3651-43e4-bfb5-caea7c27c56f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.009316558s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.32s)

TestNetworkPlugins/group/auto/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-206945 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.24s)

TestNetworkPlugins/group/auto/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-206945 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

TestNetworkPlugins/group/auto/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-206945 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)

TestNetworkPlugins/group/kindnet/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-206945 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.30s)

TestNetworkPlugins/group/kindnet/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-206945 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.23s)

TestNetworkPlugins/group/kindnet/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-206945 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.24s)

TestNetworkPlugins/group/calico/Start (92.35s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-206945 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-206945 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m32.349721423s)
--- PASS: TestNetworkPlugins/group/calico/Start (92.35s)

TestNetworkPlugins/group/custom-flannel/Start (63.54s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-206945 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E0916 20:07:16.362963  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/skaffold-576460/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-206945 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m3.538183161s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (63.54s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-206945 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.44s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.41s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-206945 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-j9qdw" [67997de9-3ee6-4a2b-b75e-8ea3cc2ab85e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-j9qdw" [67997de9-3ee6-4a2b-b75e-8ea3cc2ab85e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.00377838s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.41s)

TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-206945 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-206945 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-206945 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

TestNetworkPlugins/group/calico/ControllerPod (6.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-8dbq6" [d49540f1-9976-436a-8aaf-73297f29b940] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005830112s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-206945 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

TestNetworkPlugins/group/calico/NetCatPod (12.39s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-206945 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vvxdx" [aaf7cf6a-90d1-428d-8a1e-a74cd34606f5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vvxdx" [aaf7cf6a-90d1-428d-8a1e-a74cd34606f5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.00532378s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.39s)

TestNetworkPlugins/group/calico/DNS (0.51s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-206945 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.51s)

TestNetworkPlugins/group/calico/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-206945 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.25s)

TestNetworkPlugins/group/calico/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-206945 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

TestNetworkPlugins/group/false/Start (85.55s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-206945 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-206945 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m25.548853826s)
--- PASS: TestNetworkPlugins/group/false/Start (85.55s)

TestNetworkPlugins/group/enable-default-cni/Start (84.44s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-206945 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E0916 20:09:33.757756  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:09:35.138231  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/functional-612705/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:09:50.677522  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-206945 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m24.438097165s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (84.44s)

TestNetworkPlugins/group/false/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-206945 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.36s)

TestNetworkPlugins/group/false/NetCatPod (11.29s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-206945 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-x4kxx" [d9ec294c-df6d-4652-bec2-fa0cfbf6839d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-x4kxx" [d9ec294c-df6d-4652-bec2-fa0cfbf6839d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.003740756s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.29s)

TestNetworkPlugins/group/false/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-206945 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.20s)

TestNetworkPlugins/group/false/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-206945 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.24s)

TestNetworkPlugins/group/false/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-206945 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.18s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-206945 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.36s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-206945 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5c5lt" [13fd9db8-be80-4594-901e-1761bb33b416] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5c5lt" [13fd9db8-be80-4594-901e-1761bb33b416] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004010114s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.36s)

TestNetworkPlugins/group/flannel/Start (63.42s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-206945 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-206945 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m3.422561114s)
--- PASS: TestNetworkPlugins/group/flannel/Start (63.42s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-206945 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.34s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-206945 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.34s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-206945 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.27s)

TestNetworkPlugins/group/bridge/Start (88.18s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-206945 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0916 20:11:04.804046  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/auto-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:11:04.810849  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/auto-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:11:04.822246  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/auto-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:11:04.843613  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/auto-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:11:04.885014  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/auto-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:11:04.966428  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/auto-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:11:05.127909  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/auto-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:11:05.449972  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/auto-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:11:06.091465  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/auto-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:11:07.373053  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/auto-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:11:08.604724  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/kindnet-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:11:08.611096  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/kindnet-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:11:08.622476  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/kindnet-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:11:08.643829  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/kindnet-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:11:08.685180  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/kindnet-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:11:08.766552  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/kindnet-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:11:08.929932  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/kindnet-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:11:09.252421  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/kindnet-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:11:09.893949  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/kindnet-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:11:09.935304  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/auto-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:11:11.175618  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/kindnet-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:11:13.737490  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/kindnet-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:11:15.056925  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/auto-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:11:18.858785  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/kindnet-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:11:25.298636  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/auto-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:11:29.100670  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/kindnet-206945/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-206945 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m28.184908443s)
--- PASS: TestNetworkPlugins/group/bridge/Start (88.18s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-xqxtx" [2d88d7ce-6c07-4878-9ab8-9c7da0eef22a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005709005s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-206945 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

TestNetworkPlugins/group/flannel/NetCatPod (12.32s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-206945 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-t4pcn" [b4b8119c-c9b1-406e-a689-1925908c0e0b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0916 20:11:45.781016  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/auto-206945/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-t4pcn" [b4b8119c-c9b1-406e-a689-1925908c0e0b] Running
E0916 20:11:49.582039  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/kindnet-206945/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004276004s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.32s)

TestNetworkPlugins/group/flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-206945 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

TestNetworkPlugins/group/flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-206945 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-206945 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

TestNetworkPlugins/group/kubenet/Start (76.26s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-206945 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0916 20:12:26.742731  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/auto-206945/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-206945 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m16.262282536s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (76.26s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-206945 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.43s)

TestNetworkPlugins/group/bridge/NetCatPod (12.33s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-206945 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-wsjkd" [b33c174f-5935-48ee-9a49-557fd10b85f0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0916 20:12:30.543396  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/kindnet-206945/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-wsjkd" [b33c174f-5935-48ee-9a49-557fd10b85f0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.00399471s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.33s)

TestNetworkPlugins/group/bridge/DNS (0.30s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-206945 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.30s)

TestNetworkPlugins/group/bridge/Localhost (0.39s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-206945 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.39s)

TestNetworkPlugins/group/bridge/HairPin (0.30s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-206945 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.30s)

TestStartStop/group/old-k8s-version/serial/FirstStart (148.64s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-559163 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0916 20:13:12.108322  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/calico-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:13:12.114873  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/calico-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:13:12.126363  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/calico-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:13:12.147916  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/calico-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:13:12.189366  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/calico-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:13:12.270844  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/calico-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:13:12.432691  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/calico-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:13:12.754263  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/calico-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:13:13.395563  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/calico-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:13:14.677328  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/calico-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:13:16.588082  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/custom-flannel-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:13:17.239009  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/calico-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:13:22.360744  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/calico-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:13:32.602441  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/calico-206945/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-559163 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m28.636320032s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (148.64s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-206945 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.38s)

TestNetworkPlugins/group/kubenet/NetCatPod (12.36s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-206945 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-d7mrt" [9cfc1146-c6ae-40f8-8b20-3876bba7207c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0916 20:13:37.070240  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/custom-flannel-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:13:39.428752  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/skaffold-576460/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-d7mrt" [9cfc1146-c6ae-40f8-8b20-3876bba7207c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.004816847s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.36s)

TestNetworkPlugins/group/kubenet/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-206945 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.28s)

TestNetworkPlugins/group/kubenet/Localhost (0.27s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-206945 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.27s)

TestNetworkPlugins/group/kubenet/HairPin (0.34s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-206945 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.34s)
E0916 20:26:03.146167  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/old-k8s-version-559163/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:26:03.780235  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/default-k8s-diff-port-593663/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:26:04.804299  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/auto-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:26:08.604124  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/kindnet-206945/client.crt: no such file or directory" logger="UnhandledError"

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (71.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-593663 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0916 20:14:18.036452  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/custom-flannel-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:14:18.207630  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/functional-612705/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:14:34.045681  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/calico-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:14:35.138116  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/functional-612705/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:14:50.677562  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:14:59.121108  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/false-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:14:59.127563  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/false-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:14:59.138982  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/false-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:14:59.160374  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/false-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:14:59.201799  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/false-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:14:59.283273  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/false-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:14:59.444804  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/false-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:14:59.766574  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/false-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:15:00.414945  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/false-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:15:01.698108  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/false-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:15:04.259776  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/false-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:15:09.385217  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/false-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:15:19.627174  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/false-206945/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-593663 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m11.894179048s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (71.89s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.44s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-593663 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0cd0342b-4de4-4b42-9b1c-8d2544517c01] Pending
E0916 20:15:23.601786  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/enable-default-cni-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:15:23.608299  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/enable-default-cni-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:15:23.620065  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/enable-default-cni-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:15:23.642150  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/enable-default-cni-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:15:23.684001  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/enable-default-cni-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:15:23.765470  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/enable-default-cni-206945/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [0cd0342b-4de4-4b42-9b1c-8d2544517c01] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0916 20:15:23.927724  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/enable-default-cni-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:15:24.249752  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/enable-default-cni-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:15:24.892106  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/enable-default-cni-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:15:26.174289  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/enable-default-cni-206945/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [0cd0342b-4de4-4b42-9b1c-8d2544517c01] Running
E0916 20:15:28.736268  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/enable-default-cni-206945/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004632939s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-593663 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.44s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-593663 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-593663 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.129216573s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-593663 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.25s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-593663 --alsologtostderr -v=3
E0916 20:15:33.857972  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/enable-default-cni-206945/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-593663 --alsologtostderr -v=3: (11.06473347s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.06s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.66s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-559163 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [dc56c51c-0fcb-4152-9638-96cf59cc78a5] Pending
helpers_test.go:344: "busybox" [dc56c51c-0fcb-4152-9638-96cf59cc78a5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [dc56c51c-0fcb-4152-9638-96cf59cc78a5] Running
E0916 20:15:39.958549  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/custom-flannel-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:15:40.109252  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/false-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:15:44.099913  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/enable-default-cni-206945/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004054659s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-559163 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.66s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-593663 -n default-k8s-diff-port-593663
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-593663 -n default-k8s-diff-port-593663: exit status 7 (83.514205ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-593663 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (269.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-593663 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-593663 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m28.86372814s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-593663 -n default-k8s-diff-port-593663
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (269.29s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-559163 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-559163 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.291235037s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-559163 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.42s)

TestStartStop/group/old-k8s-version/serial/Stop (11.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-559163 --alsologtostderr -v=3
E0916 20:15:55.967463  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/calico-206945/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-559163 --alsologtostderr -v=3: (11.178664141s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.18s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-559163 -n old-k8s-version-559163
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-559163 -n old-k8s-version-559163: exit status 7 (107.748017ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-559163 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.30s)

TestStartStop/group/old-k8s-version/serial/SecondStart (145.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-559163 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0916 20:16:04.581178  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/enable-default-cni-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:16:04.804065  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/auto-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:16:08.604230  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/kindnet-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:16:21.071675  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/false-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:16:32.506067  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/auto-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:16:36.165480  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/flannel-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:16:36.171880  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/flannel-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:16:36.183303  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/flannel-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:16:36.204794  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/flannel-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:16:36.246293  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/flannel-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:16:36.307817  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/kindnet-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:16:36.328280  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/flannel-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:16:36.490339  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/flannel-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:16:36.812068  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/flannel-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:16:37.454159  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/flannel-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:16:38.735839  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/flannel-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:16:41.298038  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/flannel-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:16:45.542742  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/enable-default-cni-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:16:46.420355  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/flannel-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:16:56.662027  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/flannel-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:17:16.363446  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/skaffold-576460/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:17:17.143969  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/flannel-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:17:30.442998  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/bridge-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:17:30.449410  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/bridge-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:17:30.460972  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/bridge-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:17:30.482512  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/bridge-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:17:30.523955  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/bridge-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:17:30.605478  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/bridge-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:17:30.767117  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/bridge-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:17:31.088568  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/bridge-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:17:31.730189  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/bridge-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:17:33.012282  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/bridge-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:17:35.574595  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/bridge-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:17:40.696616  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/bridge-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:17:42.993814  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/false-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:17:50.937910  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/bridge-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:17:56.091459  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/custom-flannel-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:17:58.106014  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/flannel-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:18:07.464083  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/enable-default-cni-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:18:11.420045  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/bridge-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:18:12.108114  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/calico-206945/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-559163 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m24.848657389s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-559163 -n old-k8s-version-559163
E0916 20:18:23.800285  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/custom-flannel-206945/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (145.24s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-fw9bz" [a5fb4db9-4808-42df-9f66-a1e8bb540a88] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005249517s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-fw9bz" [a5fb4db9-4808-42df-9f66-a1e8bb540a88] Running
E0916 20:18:34.316950  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/kubenet-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:18:34.323529  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/kubenet-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:18:34.335084  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/kubenet-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:18:34.356652  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/kubenet-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:18:34.398136  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/kubenet-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:18:34.479596  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/kubenet-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:18:34.641079  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/kubenet-206945/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005239587s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-559163 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E0916 20:18:34.963414  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/kubenet-206945/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-559163 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/old-k8s-version/serial/Pause (2.97s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-559163 --alsologtostderr -v=1
E0916 20:18:35.605466  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/kubenet-206945/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-559163 -n old-k8s-version-559163
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-559163 -n old-k8s-version-559163: exit status 2 (369.921521ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-559163 -n old-k8s-version-559163
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-559163 -n old-k8s-version-559163: exit status 2 (327.751082ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-559163 --alsologtostderr -v=1
E0916 20:18:36.887288  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/kubenet-206945/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-559163 -n old-k8s-version-559163
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-559163 -n old-k8s-version-559163
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.97s)

TestStartStop/group/embed-certs/serial/FirstStart (46.52s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-138897 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0916 20:18:44.571948  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/kubenet-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:18:52.382394  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/bridge-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:18:54.814466  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/kubenet-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:19:15.295878  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/kubenet-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:19:20.027716  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/flannel-206945/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-138897 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (46.518241182s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (46.52s)

TestStartStop/group/embed-certs/serial/DeployApp (10.36s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-138897 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7f192dee-cb1d-4463-9eb4-215c17f93131] Pending
helpers_test.go:344: "busybox" [7f192dee-cb1d-4463-9eb4-215c17f93131] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7f192dee-cb1d-4463-9eb4-215c17f93131] Running
E0916 20:19:35.138590  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/functional-612705/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.00357812s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-138897 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.36s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-138897 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-138897 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.013309727s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-138897 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/embed-certs/serial/Stop (11s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-138897 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-138897 --alsologtostderr -v=3: (11.000776401s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.00s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-138897 -n embed-certs-138897
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-138897 -n embed-certs-138897: exit status 7 (86.895396ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-138897 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/embed-certs/serial/SecondStart (268.59s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-138897 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0916 20:19:50.677296  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:19:56.257477  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/kubenet-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:19:59.120881  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/false-206945/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-138897 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m28.192682303s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-138897 -n embed-certs-138897
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (268.59s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-rthdc" [939cdeae-fe12-4141-b0a7-a229d5ca3cc2] Running
E0916 20:20:14.303617  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/bridge-206945/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004088879s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-rthdc" [939cdeae-fe12-4141-b0a7-a229d5ca3cc2] Running
E0916 20:20:23.601780  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/enable-default-cni-206945/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004691693s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-593663 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-593663 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-593663 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-593663 -n default-k8s-diff-port-593663
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-593663 -n default-k8s-diff-port-593663: exit status 2 (345.711325ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-593663 -n default-k8s-diff-port-593663
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-593663 -n default-k8s-diff-port-593663: exit status 2 (384.460677ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-593663 --alsologtostderr -v=1
E0916 20:20:26.836076  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/false-206945/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-593663 -n default-k8s-diff-port-593663
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-593663 -n default-k8s-diff-port-593663
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.03s)

TestStartStop/group/no-preload/serial/FirstStart (50.65s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-054781 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0916 20:20:35.436380  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/old-k8s-version-559163/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:20:35.442797  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/old-k8s-version-559163/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:20:35.454273  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/old-k8s-version-559163/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:20:35.475701  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/old-k8s-version-559163/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:20:35.517129  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/old-k8s-version-559163/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:20:35.603336  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/old-k8s-version-559163/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:20:35.764942  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/old-k8s-version-559163/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:20:36.086410  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/old-k8s-version-559163/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:20:36.728038  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/old-k8s-version-559163/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:20:38.010245  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/old-k8s-version-559163/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:20:40.575080  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/old-k8s-version-559163/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:20:45.696733  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/old-k8s-version-559163/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:20:51.306132  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/enable-default-cni-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:20:55.938482  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/old-k8s-version-559163/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:21:04.808585  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/auto-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:21:08.603746  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/kindnet-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:21:16.420101  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/old-k8s-version-559163/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:21:18.178719  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/kubenet-206945/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-054781 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (50.64881922s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (50.65s)

TestStartStop/group/no-preload/serial/DeployApp (8.44s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-054781 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c9a94cf3-bd3f-48a1-9985-8af86c380055] Pending
helpers_test.go:344: "busybox" [c9a94cf3-bd3f-48a1-9985-8af86c380055] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c9a94cf3-bd3f-48a1-9985-8af86c380055] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.017896364s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-054781 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.44s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-054781 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-054781 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.056559337s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-054781 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.21s)

TestStartStop/group/no-preload/serial/Stop (11.05s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-054781 --alsologtostderr -v=3
E0916 20:21:36.165195  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/flannel-206945/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-054781 --alsologtostderr -v=3: (11.048299995s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.05s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-054781 -n no-preload-054781
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-054781 -n no-preload-054781: exit status 7 (70.90227ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-054781 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (267.76s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-054781 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0916 20:21:57.382167  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/old-k8s-version-559163/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:22:03.869029  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/flannel-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:22:16.363339  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/skaffold-576460/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:22:30.442650  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/bridge-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:22:56.091352  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/custom-flannel-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:22:58.145186  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/bridge-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:23:12.108072  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/calico-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:23:19.304408  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/old-k8s-version-559163/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:23:34.316914  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/kubenet-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:24:02.020151  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/kubenet-206945/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-054781 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m27.36519664s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-054781 -n no-preload-054781
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (267.76s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-nw6bd" [77ca2218-d9dd-4573-9963-3b862cca719f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003919021s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-nw6bd" [77ca2218-d9dd-4573-9963-3b862cca719f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003767782s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-138897 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-138897 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/embed-certs/serial/Pause (3.14s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-138897 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-138897 -n embed-certs-138897
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-138897 -n embed-certs-138897: exit status 2 (342.383395ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-138897 -n embed-certs-138897
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-138897 -n embed-certs-138897: exit status 2 (365.352023ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-138897 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-138897 -n embed-certs-138897
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-138897 -n embed-certs-138897
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.14s)

TestStartStop/group/newest-cni/serial/FirstStart (39.25s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-819160 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0916 20:24:50.677551  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:24:59.121089  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/false-206945/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-819160 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (39.252904992s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (39.25s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.35s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-819160 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-819160 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.344522654s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.35s)

TestStartStop/group/newest-cni/serial/Stop (11.07s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-819160 --alsologtostderr -v=3
E0916 20:25:22.802437  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/default-k8s-diff-port-593663/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:25:22.808995  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/default-k8s-diff-port-593663/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:25:22.820597  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/default-k8s-diff-port-593663/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:25:22.842330  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/default-k8s-diff-port-593663/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:25:22.883890  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/default-k8s-diff-port-593663/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:25:22.965477  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/default-k8s-diff-port-593663/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:25:23.127165  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/default-k8s-diff-port-593663/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:25:23.448861  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/default-k8s-diff-port-593663/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:25:23.601425  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/enable-default-cni-206945/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:25:24.090199  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/default-k8s-diff-port-593663/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:25:25.372101  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/default-k8s-diff-port-593663/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:25:27.934384  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/default-k8s-diff-port-593663/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-819160 --alsologtostderr -v=3: (11.071754857s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.07s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-819160 -n newest-cni-819160
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-819160 -n newest-cni-819160: exit status 7 (68.12575ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-819160 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (19.91s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-819160 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0916 20:25:33.056471  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/default-k8s-diff-port-593663/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:25:35.435922  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/old-k8s-version-559163/client.crt: no such file or directory" logger="UnhandledError"
E0916 20:25:43.298224  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/default-k8s-diff-port-593663/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-819160 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (19.406418291s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-819160 -n newest-cni-819160
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (19.91s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-819160 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-819160 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-819160 -n newest-cni-819160
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-819160 -n newest-cni-819160: exit status 2 (397.513338ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-819160 -n newest-cni-819160
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-819160 -n newest-cni-819160: exit status 2 (349.385506ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-819160 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-819160 -n newest-cni-819160
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-819160 -n newest-cni-819160
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.19s)
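The pause check above runs `minikube pause`, then polls `status` for each component, treating exit status 2 as expected ("may be ok") so long as the printed state matches. A minimal stand-alone sketch of that logic — `expect_status` is a hypothetical helper, not part of minikube:

```shell
# expect_status WANT CMD... runs CMD and accepts exit status 0 or 2,
# mirroring the "status error: exit status 2 (may be ok)" handling above.
expect_status() {
  want="$1"; shift
  out="$("$@")"; rc=$?
  if [ "$out" = "$want" ] && { [ "$rc" -eq 0 ] || [ "$rc" -eq 2 ]; }; then
    echo "ok: $want"
  else
    echo "unexpected: got '$out' (rc=$rc), want '$want'"
    return 1
  fi
}

# Against a paused profile this could wrap, e.g.:
#   expect_status Paused  out/minikube-linux-arm64 status --format='{{.APIServer}}' -p newest-cni-819160 -n newest-cni-819160
#   expect_status Stopped out/minikube-linux-arm64 status --format='{{.Kubelet}}' -p newest-cni-819160 -n newest-cni-819160
```

Accepting exit status 2 matters here: while the cluster is paused, `status` deliberately exits non-zero even though the test is passing.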

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-25rds" [795be93b-ac10-4340-a077-a6ef8003f2bc] Running
E0916 20:26:13.760356  572841 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-567461/.minikube/profiles/addons-723934/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004116691s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
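The "waiting 9m0s for pods matching ..." lines above come from the test framework's polling loop: it retries a readiness check until it passes or the deadline lapses, then logs "healthy within N s". A generic sketch of that loop — `wait_healthy` is a hypothetical helper, and the `kubectl wait` line in the comment is just one way to phrase the underlying check:

```shell
# wait_healthy TRIES CMD... retries CMD once per second until it succeeds,
# reporting how many attempts readiness took -- a rough stand-in for the
# framework's "healthy within N s" loop.
wait_healthy() {
  tries="$1"; shift
  i=1
  while [ "$i" -le "$tries" ]; do
    if "$@" >/dev/null 2>&1; then
      echo "healthy after $i attempt(s)"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timed out after $tries attempts"
  return 1
}

# Against the cluster above this could wrap, e.g.:
#   wait_healthy 540 kubectl --context no-preload-054781 -n kubernetes-dashboard \
#     wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=5s
```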

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-25rds" [795be93b-ac10-4340-a077-a6ef8003f2bc] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003541631s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-054781 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-054781 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)
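The image audit above lists the cluster's images as JSON and reports anything outside the expected Kubernetes registries — here the leftover busybox test image. A rough stand-alone illustration of that filtering; the flat JSON array below is an assumed shape for the example, not minikube's documented output format:

```shell
# Assumed sample in the spirit of `image list --format=json` output;
# the real schema may differ.
images='["registry.k8s.io/kube-apiserver:v1.31.1","registry.k8s.io/etcd:3.5.15-0","gcr.io/k8s-minikube/busybox:1.28.4-glibc"]'

# One tag per line, then report anything not hosted on registry.k8s.io,
# mirroring the "Found non-minikube image" log line above.
echo "$images" \
  | tr -d '[]"' | tr ',' '\n' \
  | grep -v '^registry\.k8s\.io/' \
  | sed 's/^/Found non-minikube image: /'
# -> Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
```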

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.87s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-054781 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-054781 -n no-preload-054781
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-054781 -n no-preload-054781: exit status 2 (336.73021ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-054781 -n no-preload-054781
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-054781 -n no-preload-054781: exit status 2 (334.437662ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-054781 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-054781 -n no-preload-054781
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-054781 -n no-preload-054781
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.87s)

                                                
                                    

Test skip (24/343)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0.55s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-931427 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-931427" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-931427
--- SKIP: TestDownloadOnlyKic (0.55s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
x
+
TestNetworkPlugins/group/cilium (5.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-206945 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-206945

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-206945

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-206945

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-206945

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-206945

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-206945

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-206945

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-206945

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-206945

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-206945

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-206945" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206945"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-206945" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206945"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-206945" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206945"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-206945

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-206945" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206945"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-206945" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206945"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-206945" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-206945" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-206945" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-206945" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-206945" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-206945" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-206945" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-206945" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-206945" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206945"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-206945" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206945"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-206945" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206945"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-206945" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206945"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-206945" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206945"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-206945

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-206945

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-206945" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-206945" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-206945

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-206945

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-206945" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-206945" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-206945" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-206945" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-206945" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-206945" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206945"

>>> host: kubelet daemon config:
* Profile "cilium-206945" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206945"

>>> k8s: kubelet logs:
* Profile "cilium-206945" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206945"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-206945" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206945"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-206945" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206945"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-206945

>>> host: docker daemon status:
* Profile "cilium-206945" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206945"

>>> host: docker daemon config:
* Profile "cilium-206945" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206945"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-206945" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206945"

>>> host: docker system info:
* Profile "cilium-206945" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206945"

>>> host: cri-docker daemon status:
* Profile "cilium-206945" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206945"

>>> host: cri-docker daemon config:
* Profile "cilium-206945" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206945"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-206945" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206945"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-206945" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206945"

>>> host: cri-dockerd version:
* Profile "cilium-206945" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206945"

>>> host: containerd daemon status:
* Profile "cilium-206945" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206945"

>>> host: containerd daemon config:
* Profile "cilium-206945" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206945"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-206945" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206945"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-206945" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206945"

>>> host: containerd config dump:
* Profile "cilium-206945" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206945"

>>> host: crio daemon status:
* Profile "cilium-206945" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206945"

>>> host: crio daemon config:
* Profile "cilium-206945" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206945"

>>> host: /etc/crio:
* Profile "cilium-206945" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206945"

>>> host: crio config:
* Profile "cilium-206945" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206945"

----------------------- debugLogs end: cilium-206945 [took: 5.702615526s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-206945" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-206945
--- SKIP: TestNetworkPlugins/group/cilium (5.95s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-854278" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-854278
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
