Test Report: Docker_Linux_docker_arm64 19644

c0eea096ace35e11d6c690a668e6718dc1bec60e:2024-09-15:36219

Tests failed (1/343)

|-------|------------------------------|--------------|
| Order |         Failed test          | Duration (s) |
|-------|------------------------------|--------------|
| 33    | TestAddons/parallel/Registry |        74.99 |
|-------|------------------------------|--------------|
TestAddons/parallel/Registry (74.99s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.786405ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-7gzvx" [1a2130f7-6cbe-4a8b-bea3-e3e4436003d2] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004011068s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-htg6g" [53474271-c9f2-4050-bf68-df5e1935aa85] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006764871s
addons_test.go:342: (dbg) Run:  kubectl --context addons-837740 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-837740 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-837740 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.126333937s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-837740 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
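For reference, the failing check can be reproduced by hand with the same command the test runs (a sketch, assuming the addons-837740 profile is still up). A healthy registry answers wget --spider -S with an HTTP/1.1 200 response header, which is what addons_test.go:353 asserts on; here the pod got no response and kubectl gave up after roughly a minute with "timed out waiting for the condition":

    kubectl --context addons-837740 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"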
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-837740 ip
2024/09/15 06:43:34 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-837740 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-837740
helpers_test.go:235: (dbg) docker inspect addons-837740:

-- stdout --
	[
	    {
	        "Id": "b3ba6fbaf9bcbe4c51474e9317b1a1a35bd3431e43c7c98aa9e38f90b412ee90",
	        "Created": "2024-09-15T06:30:20.754532867Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8917,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-15T06:30:20.930143613Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1b71fa87733590eb4674b16f6945626ae533f3af37066893e3fd70eb9476268",
	        "ResolvConfPath": "/var/lib/docker/containers/b3ba6fbaf9bcbe4c51474e9317b1a1a35bd3431e43c7c98aa9e38f90b412ee90/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b3ba6fbaf9bcbe4c51474e9317b1a1a35bd3431e43c7c98aa9e38f90b412ee90/hostname",
	        "HostsPath": "/var/lib/docker/containers/b3ba6fbaf9bcbe4c51474e9317b1a1a35bd3431e43c7c98aa9e38f90b412ee90/hosts",
	        "LogPath": "/var/lib/docker/containers/b3ba6fbaf9bcbe4c51474e9317b1a1a35bd3431e43c7c98aa9e38f90b412ee90/b3ba6fbaf9bcbe4c51474e9317b1a1a35bd3431e43c7c98aa9e38f90b412ee90-json.log",
	        "Name": "/addons-837740",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-837740:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-837740",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f5a1423fbd345ba60fa79f36094d887d07b6e5fe2cfab70a131882c20b327005-init/diff:/var/lib/docker/overlay2/a44563b42d4442f369c0c7152703f9a3fe2e4fcbab25a6b8f520f3ba6cd0cdaf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f5a1423fbd345ba60fa79f36094d887d07b6e5fe2cfab70a131882c20b327005/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f5a1423fbd345ba60fa79f36094d887d07b6e5fe2cfab70a131882c20b327005/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f5a1423fbd345ba60fa79f36094d887d07b6e5fe2cfab70a131882c20b327005/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-837740",
	                "Source": "/var/lib/docker/volumes/addons-837740/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-837740",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-837740",
	                "name.minikube.sigs.k8s.io": "addons-837740",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4fdff80516785e368972eb44dce6aab88731e1e9932522c37146ca661e167557",
	            "SandboxKey": "/var/run/docker/netns/4fdff8051678",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-837740": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "3c2646c3e2640b3b16e2bdca21d65eb739c40c2a638d9e84c3615750ebd4fc28",
	                    "EndpointID": "d1995260dbcefbe0765216f1fab559d993dd49f3d904b91c0727c39758debdaa",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-837740",
	                        "b3ba6fbaf9bc"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
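The "Ports" section of the inspect output above gives two routes to the same registry endpoint while triaging: the container's address on the addons-837740 bridge network (192.168.49.2:5000, the URL the test's debug GET used) and the host-published mapping of 5000/tcp on 127.0.0.1:32770. A quick probe sketch, assuming the container is still running; the /v2/ path is the standard Docker registry API root, an addition here rather than something this log exercises:

    curl -sI http://192.168.49.2:5000/v2/     # container IP, as used by the debug GET
    curl -sI http://127.0.0.1:32770/v2/       # host port published for 5000/tcp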
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-837740 -n addons-837740
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-837740 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-837740 logs -n 25: (1.531392739s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-221568   | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC |                     |
	|         | -p download-only-221568              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:29 UTC |
	| delete  | -p download-only-221568              | download-only-221568   | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:29 UTC |
	| start   | -o=json --download-only              | download-only-157916   | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC |                     |
	|         | -p download-only-157916              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:29 UTC |
	| delete  | -p download-only-157916              | download-only-157916   | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:29 UTC |
	| delete  | -p download-only-221568              | download-only-221568   | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:29 UTC |
	| delete  | -p download-only-157916              | download-only-157916   | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:29 UTC |
	| start   | --download-only -p                   | download-docker-771311 | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC |                     |
	|         | download-docker-771311               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p download-docker-771311            | download-docker-771311 | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:29 UTC |
	| start   | --download-only -p                   | binary-mirror-730073   | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC |                     |
	|         | binary-mirror-730073                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:35331               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-730073              | binary-mirror-730073   | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:29 UTC |
	| addons  | disable dashboard -p                 | addons-837740          | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC |                     |
	|         | addons-837740                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-837740          | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC |                     |
	|         | addons-837740                        |                        |         |         |                     |                     |
	| start   | -p addons-837740 --wait=true         | addons-837740          | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:33 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| addons  | addons-837740 addons disable         | addons-837740          | jenkins | v1.34.0 | 15 Sep 24 06:34 UTC | 15 Sep 24 06:34 UTC |
	|         | volcano --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | addons-837740 addons                 | addons-837740          | jenkins | v1.34.0 | 15 Sep 24 06:43 UTC | 15 Sep 24 06:43 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-837740 addons                 | addons-837740          | jenkins | v1.34.0 | 15 Sep 24 06:43 UTC | 15 Sep 24 06:43 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-837740 addons                 | addons-837740          | jenkins | v1.34.0 | 15 Sep 24 06:43 UTC | 15 Sep 24 06:43 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-837740          | jenkins | v1.34.0 | 15 Sep 24 06:43 UTC | 15 Sep 24 06:43 UTC |
	|         | addons-837740                        |                        |         |         |                     |                     |
	| ip      | addons-837740 ip                     | addons-837740          | jenkins | v1.34.0 | 15 Sep 24 06:43 UTC | 15 Sep 24 06:43 UTC |
	| addons  | addons-837740 addons disable         | addons-837740          | jenkins | v1.34.0 | 15 Sep 24 06:43 UTC | 15 Sep 24 06:43 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 06:29:55
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 06:29:55.490756    8422 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:29:55.490925    8422 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:29:55.490952    8422 out.go:358] Setting ErrFile to fd 2...
	I0915 06:29:55.490971    8422 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:29:55.491251    8422 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-2359/.minikube/bin
	I0915 06:29:55.491730    8422 out.go:352] Setting JSON to false
	I0915 06:29:55.492494    8422 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":747,"bootTime":1726381048,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0915 06:29:55.492593    8422 start.go:139] virtualization:  
	I0915 06:29:55.495273    8422 out.go:177] * [addons-837740] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0915 06:29:55.497782    8422 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 06:29:55.497906    8422 notify.go:220] Checking for updates...
	I0915 06:29:55.502029    8422 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 06:29:55.504321    8422 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-2359/kubeconfig
	I0915 06:29:55.506391    8422 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-2359/.minikube
	I0915 06:29:55.508477    8422 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0915 06:29:55.510878    8422 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 06:29:55.513296    8422 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:29:55.533774    8422 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0915 06:29:55.533903    8422 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:29:55.595604    8422 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-15 06:29:55.586000671 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 06:29:55.595718    8422 docker.go:318] overlay module found
	I0915 06:29:55.598109    8422 out.go:177] * Using the docker driver based on user configuration
	I0915 06:29:55.600180    8422 start.go:297] selected driver: docker
	I0915 06:29:55.600196    8422 start.go:901] validating driver "docker" against <nil>
	I0915 06:29:55.600210    8422 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 06:29:55.600878    8422 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:29:55.651426    8422 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-15 06:29:55.642528615 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 06:29:55.651665    8422 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 06:29:55.651893    8422 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 06:29:55.653727    8422 out.go:177] * Using Docker driver with root privileges
	I0915 06:29:55.655832    8422 cni.go:84] Creating CNI manager for ""
	I0915 06:29:55.655909    8422 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 06:29:55.655921    8422 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0915 06:29:55.656005    8422 start.go:340] cluster config:
	{Name:addons-837740 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-837740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:29:55.658264    8422 out.go:177] * Starting "addons-837740" primary control-plane node in "addons-837740" cluster
	I0915 06:29:55.659987    8422 cache.go:121] Beginning downloading kic base image for docker with docker
	I0915 06:29:55.662210    8422 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0915 06:29:55.664265    8422 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 06:29:55.664281    8422 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0915 06:29:55.664316    8422 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19644-2359/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0915 06:29:55.664332    8422 cache.go:56] Caching tarball of preloaded images
	I0915 06:29:55.664407    8422 preload.go:172] Found /home/jenkins/minikube-integration/19644-2359/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0915 06:29:55.664417    8422 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0915 06:29:55.664782    8422 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/config.json ...
	I0915 06:29:55.664802    8422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/config.json: {Name:mk1fe7961cb83ebea802ec66b791f26a5822ae6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:29:55.679375    8422 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0915 06:29:55.679481    8422 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0915 06:29:55.679513    8422 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0915 06:29:55.679518    8422 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0915 06:29:55.679526    8422 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0915 06:29:55.679531    8422 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0915 06:30:13.336178    8422 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0915 06:30:13.336221    8422 cache.go:194] Successfully downloaded all kic artifacts
	I0915 06:30:13.336267    8422 start.go:360] acquireMachinesLock for addons-837740: {Name:mk477b3475122614ef47a52333416900132c8763 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 06:30:13.336381    8422 start.go:364] duration metric: took 92.136µs to acquireMachinesLock for "addons-837740"
	I0915 06:30:13.336412    8422 start.go:93] Provisioning new machine with config: &{Name:addons-837740 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-837740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 06:30:13.336495    8422 start.go:125] createHost starting for "" (driver="docker")
	I0915 06:30:13.339008    8422 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0915 06:30:13.339278    8422 start.go:159] libmachine.API.Create for "addons-837740" (driver="docker")
	I0915 06:30:13.339313    8422 client.go:168] LocalClient.Create starting
	I0915 06:30:13.339442    8422 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19644-2359/.minikube/certs/ca.pem
	I0915 06:30:14.329172    8422 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19644-2359/.minikube/certs/cert.pem
	I0915 06:30:14.605385    8422 cli_runner.go:164] Run: docker network inspect addons-837740 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0915 06:30:14.621548    8422 cli_runner.go:211] docker network inspect addons-837740 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0915 06:30:14.621631    8422 network_create.go:284] running [docker network inspect addons-837740] to gather additional debugging logs...
	I0915 06:30:14.621652    8422 cli_runner.go:164] Run: docker network inspect addons-837740
	W0915 06:30:14.636610    8422 cli_runner.go:211] docker network inspect addons-837740 returned with exit code 1
	I0915 06:30:14.636638    8422 network_create.go:287] error running [docker network inspect addons-837740]: docker network inspect addons-837740: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-837740 not found
	I0915 06:30:14.636651    8422 network_create.go:289] output of [docker network inspect addons-837740]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-837740 not found
	
	** /stderr **
	I0915 06:30:14.636752    8422 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0915 06:30:14.657796    8422 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000406a20}
	I0915 06:30:14.657841    8422 network_create.go:124] attempt to create docker network addons-837740 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0915 06:30:14.657906    8422 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-837740 addons-837740
	I0915 06:30:14.729518    8422 network_create.go:108] docker network addons-837740 192.168.49.0/24 created
	I0915 06:30:14.729549    8422 kic.go:121] calculated static IP "192.168.49.2" for the "addons-837740" container
	I0915 06:30:14.729623    8422 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0915 06:30:14.744460    8422 cli_runner.go:164] Run: docker volume create addons-837740 --label name.minikube.sigs.k8s.io=addons-837740 --label created_by.minikube.sigs.k8s.io=true
	I0915 06:30:14.761453    8422 oci.go:103] Successfully created a docker volume addons-837740
	I0915 06:30:14.761545    8422 cli_runner.go:164] Run: docker run --rm --name addons-837740-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-837740 --entrypoint /usr/bin/test -v addons-837740:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0915 06:30:16.975199    8422 cli_runner.go:217] Completed: docker run --rm --name addons-837740-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-837740 --entrypoint /usr/bin/test -v addons-837740:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib: (2.213593066s)
	I0915 06:30:16.975230    8422 oci.go:107] Successfully prepared a docker volume addons-837740
	I0915 06:30:16.975263    8422 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 06:30:16.975283    8422 kic.go:194] Starting extracting preloaded images to volume ...
	I0915 06:30:16.975349    8422 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19644-2359/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-837740:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0915 06:30:20.683211    8422 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19644-2359/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-837740:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (3.707824928s)
	I0915 06:30:20.683241    8422 kic.go:203] duration metric: took 3.707955143s to extract preloaded images to volume ...
	W0915 06:30:20.683410    8422 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0915 06:30:20.683528    8422 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0915 06:30:20.739982    8422 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-837740 --name addons-837740 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-837740 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-837740 --network addons-837740 --ip 192.168.49.2 --volume addons-837740:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0915 06:30:21.107611    8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Running}}
	I0915 06:30:21.128498    8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
	I0915 06:30:21.157201    8422 cli_runner.go:164] Run: docker exec addons-837740 stat /var/lib/dpkg/alternatives/iptables
	I0915 06:30:21.226626    8422 oci.go:144] the created container "addons-837740" has a running status.
	I0915 06:30:21.226662    8422 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa...
	I0915 06:30:22.260322    8422 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0915 06:30:22.283288    8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
	I0915 06:30:22.299737    8422 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0915 06:30:22.299759    8422 kic_runner.go:114] Args: [docker exec --privileged addons-837740 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0915 06:30:22.354684    8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
	I0915 06:30:22.371829    8422 machine.go:93] provisionDockerMachine start ...
	I0915 06:30:22.371916    8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
	I0915 06:30:22.389178    8422 main.go:141] libmachine: Using SSH client type: native
	I0915 06:30:22.389440    8422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0915 06:30:22.389450    8422 main.go:141] libmachine: About to run SSH command:
	hostname
	I0915 06:30:22.525325    8422 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-837740
	
	I0915 06:30:22.525350    8422 ubuntu.go:169] provisioning hostname "addons-837740"
	I0915 06:30:22.525415    8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
	I0915 06:30:22.543043    8422 main.go:141] libmachine: Using SSH client type: native
	I0915 06:30:22.543291    8422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0915 06:30:22.543317    8422 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-837740 && echo "addons-837740" | sudo tee /etc/hostname
	I0915 06:30:22.689867    8422 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-837740
	
	I0915 06:30:22.690040    8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
	I0915 06:30:22.712795    8422 main.go:141] libmachine: Using SSH client type: native
	I0915 06:30:22.713037    8422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0915 06:30:22.713060    8422 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-837740' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-837740/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-837740' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 06:30:22.850288    8422 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0915 06:30:22.850381    8422 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19644-2359/.minikube CaCertPath:/home/jenkins/minikube-integration/19644-2359/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19644-2359/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19644-2359/.minikube}
	I0915 06:30:22.850437    8422 ubuntu.go:177] setting up certificates
	I0915 06:30:22.850468    8422 provision.go:84] configureAuth start
	I0915 06:30:22.850585    8422 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-837740
	I0915 06:30:22.868830    8422 provision.go:143] copyHostCerts
	I0915 06:30:22.868931    8422 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-2359/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19644-2359/.minikube/ca.pem (1078 bytes)
	I0915 06:30:22.869116    8422 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-2359/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19644-2359/.minikube/cert.pem (1123 bytes)
	I0915 06:30:22.869202    8422 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-2359/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19644-2359/.minikube/key.pem (1675 bytes)
	I0915 06:30:22.869278    8422 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19644-2359/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19644-2359/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19644-2359/.minikube/certs/ca-key.pem org=jenkins.addons-837740 san=[127.0.0.1 192.168.49.2 addons-837740 localhost minikube]
	I0915 06:30:23.867372    8422 provision.go:177] copyRemoteCerts
	I0915 06:30:23.867442    8422 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 06:30:23.867484    8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
	I0915 06:30:23.883832    8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
	I0915 06:30:23.979057    8422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2359/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0915 06:30:24.003561    8422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2359/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0915 06:30:24.036101    8422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2359/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0915 06:30:24.061152    8422 provision.go:87] duration metric: took 1.21064733s to configureAuth
	I0915 06:30:24.061179    8422 ubuntu.go:193] setting minikube options for container-runtime
	I0915 06:30:24.061396    8422 config.go:182] Loaded profile config "addons-837740": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 06:30:24.061458    8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
	I0915 06:30:24.080077    8422 main.go:141] libmachine: Using SSH client type: native
	I0915 06:30:24.080333    8422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0915 06:30:24.080349    8422 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0915 06:30:24.218509    8422 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0915 06:30:24.218535    8422 ubuntu.go:71] root file system type: overlay
	I0915 06:30:24.218645    8422 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0915 06:30:24.218713    8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
	I0915 06:30:24.235829    8422 main.go:141] libmachine: Using SSH client type: native
	I0915 06:30:24.236074    8422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0915 06:30:24.236155    8422 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0915 06:30:24.385696    8422 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0915 06:30:24.385784    8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
	I0915 06:30:24.402347    8422 main.go:141] libmachine: Using SSH client type: native
	I0915 06:30:24.402592    8422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0915 06:30:24.402618    8422 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0915 06:30:25.204963    8422 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-06 12:06:36.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-15 06:30:24.378891711 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0915 06:30:25.205000    8422 machine.go:96] duration metric: took 2.833152719s to provisionDockerMachine
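
	[editor's note] The `diff -u ... || { mv ...; systemctl ... restart docker; }` step above is an update-if-changed guard: docker is only restarted when the rendered unit actually differs. A rough Go equivalent of that guard (paths from the log; the `systemctl enable` step is omitted and this only makes sense run as root inside the node):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	// replaceIfChanged imitates the shell one-liner above: the candidate unit
	// only replaces the installed one (and docker is only restarted) when the
	// two files differ, keeping repeated provisioning runs idempotent.
	func replaceIfChanged(current, candidate string) error {
		a, _ := os.ReadFile(current) // a missing unit reads as empty and so differs
		b, err := os.ReadFile(candidate)
		if err != nil {
			return err
		}
		if bytes.Equal(a, b) {
			return os.Remove(candidate) // nothing changed; drop the .new file
		}
		if err := os.Rename(candidate, current); err != nil {
			return err
		}
		if err := exec.Command("systemctl", "daemon-reload").Run(); err != nil {
			return err
		}
		return exec.Command("systemctl", "restart", "docker").Run()
	}

	func main() {
		err := replaceIfChanged("/lib/systemd/system/docker.service",
			"/lib/systemd/system/docker.service.new")
		if err != nil {
			fmt.Println(err)
		}
	}
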
	I0915 06:30:25.205028    8422 client.go:171] duration metric: took 11.865687574s to LocalClient.Create
	I0915 06:30:25.205055    8422 start.go:167] duration metric: took 11.865778003s to libmachine.API.Create "addons-837740"
	I0915 06:30:25.205068    8422 start.go:293] postStartSetup for "addons-837740" (driver="docker")
	I0915 06:30:25.205078    8422 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 06:30:25.205164    8422 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 06:30:25.205232    8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
	I0915 06:30:25.222948    8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
	I0915 06:30:25.318714    8422 ssh_runner.go:195] Run: cat /etc/os-release
	I0915 06:30:25.322843    8422 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0915 06:30:25.322877    8422 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0915 06:30:25.322887    8422 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0915 06:30:25.322897    8422 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0915 06:30:25.322908    8422 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-2359/.minikube/addons for local assets ...
	I0915 06:30:25.322976    8422 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-2359/.minikube/files for local assets ...
	I0915 06:30:25.323000    8422 start.go:296] duration metric: took 117.924264ms for postStartSetup
	I0915 06:30:25.323306    8422 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-837740
	I0915 06:30:25.339743    8422 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/config.json ...
	I0915 06:30:25.340027    8422 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 06:30:25.340077    8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
	I0915 06:30:25.356976    8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
	I0915 06:30:25.451112    8422 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0915 06:30:25.455762    8422 start.go:128] duration metric: took 12.119251282s to createHost
	I0915 06:30:25.455788    8422 start.go:83] releasing machines lock for "addons-837740", held for 12.119392066s
	I0915 06:30:25.455883    8422 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-837740
	I0915 06:30:25.472878    8422 ssh_runner.go:195] Run: cat /version.json
	I0915 06:30:25.472934    8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
	I0915 06:30:25.473180    8422 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0915 06:30:25.473237    8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
	I0915 06:30:25.492536    8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
	I0915 06:30:25.500015    8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
	I0915 06:30:25.585540    8422 ssh_runner.go:195] Run: systemctl --version
	I0915 06:30:25.718987    8422 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0915 06:30:25.724484    8422 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0915 06:30:25.750616    8422 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0915 06:30:25.750739    8422 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 06:30:25.779289    8422 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
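
	[editor's note] The find/mv pair above sidelines any preinstalled bridge or podman CNI configs by renaming them to *.mk_disabled, leaving room for the 1-k8s.conflist that minikube writes later. A Go sketch of the same sweep, assuming a plain substring match is enough to identify the configs (as the -name globs above suggest):

	package main

	import (
		"os"
		"path/filepath"
		"strings"
	)

	// Rename competing bridge/podman CNI configs to *.mk_disabled so the
	// bridge config minikube writes later is the only active one.
	func main() {
		dir := "/etc/cni/net.d"
		entries, err := os.ReadDir(dir)
		if err != nil {
			panic(err)
		}
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				p := filepath.Join(dir, name)
				if err := os.Rename(p, p+".mk_disabled"); err != nil {
					panic(err)
				}
			}
		}
	}
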
	I0915 06:30:25.779314    8422 start.go:495] detecting cgroup driver to use...
	I0915 06:30:25.779353    8422 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0915 06:30:25.779452    8422 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 06:30:25.795977    8422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0915 06:30:25.805603    8422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0915 06:30:25.815646    8422 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0915 06:30:25.815731    8422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0915 06:30:25.825642    8422 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0915 06:30:25.835733    8422 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0915 06:30:25.845574    8422 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0915 06:30:25.855412    8422 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0915 06:30:25.864314    8422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0915 06:30:25.874274    8422 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0915 06:30:25.883966    8422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0915 06:30:25.893916    8422 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0915 06:30:25.902984    8422 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0915 06:30:25.911278    8422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:30:25.992853    8422 ssh_runner.go:195] Run: sudo systemctl restart containerd
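
	[editor's note] The sed chain above rewrites /etc/containerd/config.toml in place so containerd matches the "cgroupfs" driver detected on the host. A Go stand-in for just the SystemdCgroup edit, under the assumption that a whole-line regex substitution captures what the sed expression does (must run as root inside the node):

	package main

	import (
		"os"
		"regexp"
	)

	// Force SystemdCgroup = false in containerd's config, the same effect
	// as `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'`.
	func main() {
		const path = "/etc/containerd/config.toml"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
		if err := os.WriteFile(path, out, 0644); err != nil {
			panic(err)
		}
	}
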
	I0915 06:30:26.108450    8422 start.go:495] detecting cgroup driver to use...
	I0915 06:30:26.108538    8422 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0915 06:30:26.108625    8422 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0915 06:30:26.126937    8422 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0915 06:30:26.127050    8422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0915 06:30:26.142151    8422 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 06:30:26.161819    8422 ssh_runner.go:195] Run: which cri-dockerd
	I0915 06:30:26.166732    8422 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0915 06:30:26.176588    8422 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0915 06:30:26.197066    8422 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0915 06:30:26.298468    8422 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0915 06:30:26.393901    8422 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0915 06:30:26.394120    8422 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0915 06:30:26.413032    8422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:30:26.511583    8422 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0915 06:30:26.774737    8422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0915 06:30:26.787088    8422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0915 06:30:26.798955    8422 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0915 06:30:26.884577    8422 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0915 06:30:26.983569    8422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:30:27.072596    8422 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0915 06:30:27.088033    8422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0915 06:30:27.099721    8422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:30:27.189239    8422 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0915 06:30:27.256707    8422 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0915 06:30:27.256860    8422 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0915 06:30:27.260580    8422 start.go:563] Will wait 60s for crictl version
	I0915 06:30:27.260686    8422 ssh_runner.go:195] Run: which crictl
	I0915 06:30:27.264645    8422 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0915 06:30:27.303474    8422 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
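
	[editor's note] "Will wait 60s for socket path /var/run/cri-dockerd.sock" above amounts to polling for the socket with a deadline before crictl is consulted. A minimal sketch of that wait; the 500ms poll interval is an assumption, the log only shows the 60s budget and a stat call:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until the cri-dockerd socket exists or the
	// budget runs out, mirroring the start.go:542 step above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
			panic(err)
		}
	}
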
	I0915 06:30:27.303594    8422 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0915 06:30:27.326570    8422 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0915 06:30:27.353452    8422 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0915 06:30:27.353563    8422 cli_runner.go:164] Run: docker network inspect addons-837740 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0915 06:30:27.369313    8422 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0915 06:30:27.372894    8422 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
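
	[editor's note] The bash pipeline above refreshes the host.minikube.internal entry by filtering out any stale line, appending the new mapping, and copying the result back over /etc/hosts. A rough Go equivalent (needs root inside the node; writing directly rather than via a temp file is a simplification of the /tmp/h.$$ dance):

	package main

	import (
		"os"
		"strings"
	)

	// Drop any stale host.minikube.internal line and append the fresh
	// mapping, as the grep -v / echo pipeline above does.
	func main() {
		const entry = "192.168.49.1\thost.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		kept := lines[:0]
		for _, line := range lines {
			if !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, entry)
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
	}
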
	I0915 06:30:27.383894    8422 kubeadm.go:883] updating cluster {Name:addons-837740 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-837740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0915 06:30:27.384015    8422 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 06:30:27.384078    8422 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0915 06:30:27.402434    8422 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0915 06:30:27.402457    8422 docker.go:615] Images already preloaded, skipping extraction
	I0915 06:30:27.402526    8422 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0915 06:30:27.418663    8422 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0915 06:30:27.418687    8422 cache_images.go:84] Images are preloaded, skipping loading
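
	[editor's note] The "Images are preloaded, skipping loading" decision above reduces to listing the tags the docker daemon already has and checking the expected control-plane images against them. A sketch of that check (expected list abbreviated from the stdout block above):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// Compare the daemon's image tags against the expected preloaded set;
	// a miss is what would trigger extraction from the preload tarball.
	func main() {
		out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			panic(err)
		}
		have := map[string]bool{}
		for _, img := range strings.Fields(string(out)) {
			have[img] = true
		}
		for _, want := range []string{
			"registry.k8s.io/kube-apiserver:v1.31.1",
			"registry.k8s.io/etcd:3.5.15-0",
			"registry.k8s.io/pause:3.10",
			"gcr.io/k8s-minikube/storage-provisioner:v5",
		} {
			if !have[want] {
				fmt.Println("missing:", want)
			}
		}
	}
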
	I0915 06:30:27.418697    8422 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0915 06:30:27.418793    8422 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-837740 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-837740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0915 06:30:27.418860    8422 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0915 06:30:27.464154    8422 cni.go:84] Creating CNI manager for ""
	I0915 06:30:27.464182    8422 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 06:30:27.464192    8422 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0915 06:30:27.464218    8422 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-837740 NodeName:addons-837740 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0915 06:30:27.464359    8422 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-837740"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0915 06:30:27.464461    8422 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0915 06:30:27.473282    8422 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 06:30:27.473353    8422 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0915 06:30:27.482067    8422 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0915 06:30:27.499918    8422 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 06:30:27.517511    8422 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0915 06:30:27.535263    8422 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0915 06:30:27.538536    8422 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 06:30:27.549202    8422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:30:27.634233    8422 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 06:30:27.649209    8422 certs.go:68] Setting up /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740 for IP: 192.168.49.2
	I0915 06:30:27.649228    8422 certs.go:194] generating shared ca certs ...
	I0915 06:30:27.649244    8422 certs.go:226] acquiring lock for ca certs: {Name:mk13c71d6895f2d850a77bc195b18d377b1ebab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:27.649371    8422 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19644-2359/.minikube/ca.key
	I0915 06:30:27.908855    8422 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-2359/.minikube/ca.crt ...
	I0915 06:30:27.908884    8422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2359/.minikube/ca.crt: {Name:mk3b0689801412b44fa166e8fdbf24d56dce9b53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:27.909112    8422 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-2359/.minikube/ca.key ...
	I0915 06:30:27.909128    8422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2359/.minikube/ca.key: {Name:mk83d56d5dc3987cdf10455f164b84411abafa05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:27.909242    8422 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19644-2359/.minikube/proxy-client-ca.key
	I0915 06:30:28.687516    8422 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-2359/.minikube/proxy-client-ca.crt ...
	I0915 06:30:28.687549    8422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2359/.minikube/proxy-client-ca.crt: {Name:mk55025023dfb8fd9a7f55d023f6c0ea9adcc0b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:28.687735    8422 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-2359/.minikube/proxy-client-ca.key ...
	I0915 06:30:28.687748    8422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2359/.minikube/proxy-client-ca.key: {Name:mk123f0d53fa1bac4f2d6191863a97da19cc0845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:28.687826    8422 certs.go:256] generating profile certs ...
	I0915 06:30:28.687883    8422 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.key
	I0915 06:30:28.687894    8422 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.crt with IP's: []
	I0915 06:30:28.839851    8422 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.crt ...
	I0915 06:30:28.839886    8422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.crt: {Name:mke9ce8ea39d7af3cb4d7a78a390c92cbe920c41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:28.840083    8422 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.key ...
	I0915 06:30:28.840096    8422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.key: {Name:mk6dc285a1be0c8296b45a1eeeed6c7936967204 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:28.840173    8422 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/apiserver.key.559c3868
	I0915 06:30:28.840198    8422 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/apiserver.crt.559c3868 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0915 06:30:30.217736    8422 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/apiserver.crt.559c3868 ...
	I0915 06:30:30.217776    8422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/apiserver.crt.559c3868: {Name:mk41712f3624b73d5ebed9a84d068bbcb9634185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:30.218012    8422 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/apiserver.key.559c3868 ...
	I0915 06:30:30.218031    8422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/apiserver.key.559c3868: {Name:mk58d2a8cf2b714c2c289d85d02b81730638e260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:30.218129    8422 certs.go:381] copying /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/apiserver.crt.559c3868 -> /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/apiserver.crt
	I0915 06:30:30.218215    8422 certs.go:385] copying /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/apiserver.key.559c3868 -> /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/apiserver.key
	I0915 06:30:30.218277    8422 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/proxy-client.key
	I0915 06:30:30.218299    8422 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/proxy-client.crt with IP's: []
	I0915 06:30:30.639685    8422 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/proxy-client.crt ...
	I0915 06:30:30.639716    8422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/proxy-client.crt: {Name:mkf6acb1dccda4a096cbf1dfcd5f2db6356b76e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:30.639901    8422 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/proxy-client.key ...
	I0915 06:30:30.639915    8422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/proxy-client.key: {Name:mk6cdca20081c5d4d5edca310f6cda8439b596f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:30.640103    8422 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-2359/.minikube/certs/ca-key.pem (1679 bytes)
	I0915 06:30:30.640146    8422 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-2359/.minikube/certs/ca.pem (1078 bytes)
	I0915 06:30:30.640179    8422 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-2359/.minikube/certs/cert.pem (1123 bytes)
	I0915 06:30:30.640207    8422 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-2359/.minikube/certs/key.pem (1675 bytes)
	I0915 06:30:30.640777    8422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2359/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 06:30:30.665699    8422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2359/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0915 06:30:30.689349    8422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2359/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 06:30:30.712479    8422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2359/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0915 06:30:30.737351    8422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0915 06:30:30.761568    8422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0915 06:30:30.787576    8422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 06:30:30.814173    8422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0915 06:30:30.838428    8422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2359/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 06:30:30.863319    8422 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
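
	[editor's note] The profile certs scp'd above were minted by minikube's own CA with the IP SANs shown at the apiserver.crt step (10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2). For illustration only, a self-contained Go sketch of issuing an IP-SAN server cert from a throwaway CA; key sizes, validity, and subjects are placeholders, not what minikube's crypto.go actually uses:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA standing in for minikubeCA.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert carrying the IP SANs shown for apiserver.crt in the log.
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
			},
		}
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
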
	I0915 06:30:30.881545    8422 ssh_runner.go:195] Run: openssl version
	I0915 06:30:30.886907    8422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 06:30:30.896324    8422 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:30:30.899578    8422 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 06:30 /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:30:30.899640    8422 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:30:30.906307    8422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
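
	[editor's note] The b5213941.0 symlink above is OpenSSL's subject-hash lookup name for minikubeCA.pem: OpenSSL-based clients resolve trust anchors in /etc/ssl/certs by hash. A sketch that recomputes the hash with the same openssl call and recreates the link (paths from the log; needs root for /etc/ssl/certs):

	package main

	import (
		"os"
		"os/exec"
		"strings"
	)

	// Recreate the <subject-hash>.0 symlink that `openssl x509 -hash` and
	// the ln -fs above produce for minikubeCA.pem.
	func main() {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout",
			"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
		if err != nil {
			panic(err)
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		_ = os.Remove(link) // replace any stale link
		if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
			panic(err)
		}
	}
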
	I0915 06:30:30.915634    8422 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 06:30:30.918842    8422 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0915 06:30:30.918913    8422 kubeadm.go:392] StartCluster: {Name:addons-837740 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-837740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:30:30.919050    8422 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0915 06:30:30.935500    8422 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0915 06:30:30.944221    8422 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 06:30:30.952893    8422 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0915 06:30:30.952984    8422 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 06:30:30.961657    8422 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0915 06:30:30.961716    8422 kubeadm.go:157] found existing configuration files:
	
	I0915 06:30:30.961779    8422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0915 06:30:30.970499    8422 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0915 06:30:30.970583    8422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0915 06:30:30.978660    8422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0915 06:30:30.987297    8422 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0915 06:30:30.987380    8422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0915 06:30:30.995730    8422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0915 06:30:31.004227    8422 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0915 06:30:31.004329    8422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0915 06:30:31.017031    8422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0915 06:30:31.025777    8422 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0915 06:30:31.025851    8422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
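
	[editor's note] The four grep/rm pairs above share one pattern: keep a kubeconfig only if it already points at the expected control-plane endpoint, otherwise delete it so kubeadm regenerates it during init. Condensed into a Go sketch (paths and endpoint from the log; only meaningful as root inside the node):

	package main

	import (
		"os"
		"os/exec"
	)

	// Sweep stale kubeconfigs before kubeadm init, mirroring the
	// "grep endpoint || rm -f" sequence above.
	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		for _, conf := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			if exec.Command("grep", "-q", endpoint, conf).Run() != nil {
				os.Remove(conf) // missing or pointing elsewhere; safe to drop
			}
		}
	}
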
	I0915 06:30:31.034902    8422 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0915 06:30:31.079133    8422 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0915 06:30:31.079259    8422 kubeadm.go:310] [preflight] Running pre-flight checks
	I0915 06:30:31.102260    8422 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0915 06:30:31.102344    8422 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-aws
	I0915 06:30:31.102385    8422 kubeadm.go:310] OS: Linux
	I0915 06:30:31.102434    8422 kubeadm.go:310] CGROUPS_CPU: enabled
	I0915 06:30:31.102486    8422 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0915 06:30:31.102538    8422 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0915 06:30:31.102589    8422 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0915 06:30:31.102641    8422 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0915 06:30:31.102692    8422 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0915 06:30:31.102740    8422 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0915 06:30:31.102792    8422 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0915 06:30:31.102849    8422 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0915 06:30:31.171670    8422 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0915 06:30:31.171782    8422 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0915 06:30:31.171876    8422 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0915 06:30:31.183792    8422 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0915 06:30:31.188898    8422 out.go:235]   - Generating certificates and keys ...
	I0915 06:30:31.189049    8422 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0915 06:30:31.189144    8422 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0915 06:30:31.664598    8422 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0915 06:30:32.017260    8422 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0915 06:30:32.482571    8422 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0915 06:30:33.022487    8422 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0915 06:30:33.515113    8422 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0915 06:30:33.515555    8422 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-837740 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0915 06:30:34.106957    8422 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0915 06:30:34.107267    8422 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-837740 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0915 06:30:34.392803    8422 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0915 06:30:34.975737    8422 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0915 06:30:36.016010    8422 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0915 06:30:36.016095    8422 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0915 06:30:36.506979    8422 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0915 06:30:36.857933    8422 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0915 06:30:37.487161    8422 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0915 06:30:37.844560    8422 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0915 06:30:38.021068    8422 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0915 06:30:38.022106    8422 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0915 06:30:38.025430    8422 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0915 06:30:38.027752    8422 out.go:235]   - Booting up control plane ...
	I0915 06:30:38.027870    8422 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0915 06:30:38.027957    8422 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0915 06:30:38.028967    8422 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0915 06:30:38.040889    8422 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0915 06:30:38.047588    8422 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0915 06:30:38.047844    8422 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0915 06:30:38.167582    8422 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0915 06:30:38.167707    8422 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0915 06:30:39.669071    8422 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.50144258s
	I0915 06:30:39.669167    8422 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0915 06:30:47.170334    8422 kubeadm.go:310] [api-check] The API server is healthy after 7.501385072s
	I0915 06:30:47.191798    8422 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0915 06:30:47.205945    8422 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0915 06:30:47.230922    8422 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0915 06:30:47.231130    8422 kubeadm.go:310] [mark-control-plane] Marking the node addons-837740 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0915 06:30:47.240826    8422 kubeadm.go:310] [bootstrap-token] Using token: brjfs8.a4kwxi7fgc9yosoz
	I0915 06:30:47.242826    8422 out.go:235]   - Configuring RBAC rules ...
	I0915 06:30:47.243035    8422 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0915 06:30:47.248309    8422 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0915 06:30:47.255766    8422 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0915 06:30:47.259379    8422 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0915 06:30:47.264813    8422 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0915 06:30:47.270028    8422 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0915 06:30:47.577370    8422 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0915 06:30:48.003173    8422 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0915 06:30:48.577134    8422 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0915 06:30:48.578350    8422 kubeadm.go:310] 
	I0915 06:30:48.578420    8422 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0915 06:30:48.578426    8422 kubeadm.go:310] 
	I0915 06:30:48.578502    8422 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0915 06:30:48.578506    8422 kubeadm.go:310] 
	I0915 06:30:48.578531    8422 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0915 06:30:48.578590    8422 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0915 06:30:48.578639    8422 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0915 06:30:48.578644    8422 kubeadm.go:310] 
	I0915 06:30:48.578697    8422 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0915 06:30:48.578701    8422 kubeadm.go:310] 
	I0915 06:30:48.578748    8422 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0915 06:30:48.578753    8422 kubeadm.go:310] 
	I0915 06:30:48.578804    8422 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0915 06:30:48.578886    8422 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0915 06:30:48.578953    8422 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0915 06:30:48.578958    8422 kubeadm.go:310] 
	I0915 06:30:48.579040    8422 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0915 06:30:48.579115    8422 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0915 06:30:48.579119    8422 kubeadm.go:310] 
	I0915 06:30:48.579202    8422 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token brjfs8.a4kwxi7fgc9yosoz \
	I0915 06:30:48.579303    8422 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f554695279f590146a1d6e30dd969f83b0e60351f554476a16c563429bd9a62b \
	I0915 06:30:48.579325    8422 kubeadm.go:310] 	--control-plane 
	I0915 06:30:48.579330    8422 kubeadm.go:310] 
	I0915 06:30:48.579419    8422 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0915 06:30:48.579424    8422 kubeadm.go:310] 
	I0915 06:30:48.579504    8422 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token brjfs8.a4kwxi7fgc9yosoz \
	I0915 06:30:48.579604    8422 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f554695279f590146a1d6e30dd969f83b0e60351f554476a16c563429bd9a62b 
	I0915 06:30:48.582143    8422 kubeadm.go:310] W0915 06:30:31.074967    1820 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0915 06:30:48.582457    8422 kubeadm.go:310] W0915 06:30:31.076167    1820 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0915 06:30:48.582670    8422 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-aws\n", err: exit status 1
	I0915 06:30:48.582791    8422 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
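	The kubeadm output above ends with ready-made join commands. Nothing in this single-node run executes them, but as a minimal sketch, a worker could be attached with exactly the values printed in the log (token and CA-cert hash copied verbatim; run as root on the joining node):

	  kubeadm join control-plane.minikube.internal:8443 \
	    --token brjfs8.a4kwxi7fgc9yosoz \
	    --discovery-token-ca-cert-hash sha256:f554695279f590146a1d6e30dd969f83b0e60351f554476a16c563429bd9a62b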
	I0915 06:30:48.582812    8422 cni.go:84] Creating CNI manager for ""
	I0915 06:30:48.582827    8422 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 06:30:48.586060    8422 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0915 06:30:48.587791    8422 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0915 06:30:48.596658    8422 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
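	The scp above writes minikube's bridge CNI config to /etc/cni/net.d/1-k8s.conflist; the 496-byte payload itself is not reproduced in the log. A hypothetical sketch of a two-plugin bridge conflist of that general shape (every field value here is an assumption, not the file minikube shipped):

	  sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
	  {
	    "cniVersion": "0.3.1",
	    "name": "bridge",
	    "plugins": [
	      {
	        "type": "bridge",
	        "bridge": "bridge",
	        "isDefaultGateway": true,
	        "ipMasq": true,
	        "hairpinMode": true,
	        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	      },
	      { "type": "portmap", "capabilities": { "portMappings": true } }
	    ]
	  }
	  EOF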
	I0915 06:30:48.615587    8422 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0915 06:30:48.615704    8422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:48.615770    8422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-837740 minikube.k8s.io/updated_at=2024_09_15T06_30_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a minikube.k8s.io/name=addons-837740 minikube.k8s.io/primary=true
	I0915 06:30:48.753454    8422 ops.go:34] apiserver oom_adj: -16
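	The oom_adj read above confirms the API server is shielded from the kernel OOM killer. The same one-off check by hand (a sketch; assumes pgrep matches exactly one kube-apiserver process on the node):

	  cat /proc/$(pgrep kube-apiserver)/oom_adj   # -16 in this run; a negative value means protected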
	I0915 06:30:48.753563    8422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:49.253920    8422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:49.754515    8422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:50.253661    8422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:50.754442    8422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:51.254642    8422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:51.753693    8422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:52.254258    8422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:52.341833    8422 kubeadm.go:1113] duration metric: took 3.726172194s to wait for elevateKubeSystemPrivileges
	I0915 06:30:52.341864    8422 kubeadm.go:394] duration metric: took 21.422979376s to StartCluster
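	The burst of `kubectl get sa default` calls above is a poll loop: kube-system privileges are considered elevated once the default ServiceAccount exists. A minimal bash sketch of the same wait:

	  until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5   # the log shows roughly 500ms between attempts
	  done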
	I0915 06:30:52.341882    8422 settings.go:142] acquiring lock: {Name:mk8198f125c4123ce66d3a387e925294953ccbbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:52.342030    8422 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19644-2359/kubeconfig
	I0915 06:30:52.342393    8422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2359/kubeconfig: {Name:mk02932df8d8a4c1b90f61568583a2b22575293e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:52.342603    8422 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 06:30:52.342705    8422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0915 06:30:52.342935    8422 config.go:182] Loaded profile config "addons-837740": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 06:30:52.342967    8422 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
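	The toEnable map above is the set the addon manager iterates over for this profile. The same state can be inspected or changed from the CLI used elsewhere in this report, e.g.:

	  out/minikube-linux-arm64 -p addons-837740 addons list
	  out/minikube-linux-arm64 -p addons-837740 addons enable registry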
	I0915 06:30:52.343046    8422 addons.go:69] Setting yakd=true in profile "addons-837740"
	I0915 06:30:52.343063    8422 addons.go:234] Setting addon yakd=true in "addons-837740"
	I0915 06:30:52.343085    8422 host.go:66] Checking if "addons-837740" exists ...
	I0915 06:30:52.343591    8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
	I0915 06:30:52.344027    8422 addons.go:69] Setting inspektor-gadget=true in profile "addons-837740"
	I0915 06:30:52.344050    8422 addons.go:234] Setting addon inspektor-gadget=true in "addons-837740"
	I0915 06:30:52.344074    8422 host.go:66] Checking if "addons-837740" exists ...
	I0915 06:30:52.344477    8422 out.go:177] * Verifying Kubernetes components...
	I0915 06:30:52.344719    8422 addons.go:69] Setting cloud-spanner=true in profile "addons-837740"
	I0915 06:30:52.344743    8422 addons.go:234] Setting addon cloud-spanner=true in "addons-837740"
	I0915 06:30:52.344770    8422 host.go:66] Checking if "addons-837740" exists ...
	I0915 06:30:52.344772    8422 addons.go:69] Setting metrics-server=true in profile "addons-837740"
	I0915 06:30:52.344822    8422 addons.go:234] Setting addon metrics-server=true in "addons-837740"
	I0915 06:30:52.344861    8422 host.go:66] Checking if "addons-837740" exists ...
	I0915 06:30:52.345171    8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
	I0915 06:30:52.345512    8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
	I0915 06:30:52.345894    8422 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-837740"
	I0915 06:30:52.345936    8422 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-837740"
	I0915 06:30:52.345967    8422 host.go:66] Checking if "addons-837740" exists ...
	I0915 06:30:52.346481    8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
	I0915 06:30:52.349269    8422 addons.go:69] Setting default-storageclass=true in profile "addons-837740"
	I0915 06:30:52.349297    8422 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-837740"
	I0915 06:30:52.349643    8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
	I0915 06:30:52.351831    8422 addons.go:69] Setting gcp-auth=true in profile "addons-837740"
	I0915 06:30:52.351867    8422 mustload.go:65] Loading cluster: addons-837740
	I0915 06:30:52.352187    8422 config.go:182] Loaded profile config "addons-837740": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 06:30:52.359680    8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
	I0915 06:30:52.364904    8422 addons.go:69] Setting ingress=true in profile "addons-837740"
	I0915 06:30:52.364993    8422 addons.go:234] Setting addon ingress=true in "addons-837740"
	I0915 06:30:52.365073    8422 host.go:66] Checking if "addons-837740" exists ...
	I0915 06:30:52.365820    8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
	I0915 06:30:52.372443    8422 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-837740"
	I0915 06:30:52.372484    8422 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-837740"
	I0915 06:30:52.372520    8422 host.go:66] Checking if "addons-837740" exists ...
	I0915 06:30:52.373000    8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
	I0915 06:30:52.387743    8422 addons.go:69] Setting registry=true in profile "addons-837740"
	I0915 06:30:52.387788    8422 addons.go:234] Setting addon registry=true in "addons-837740"
	I0915 06:30:52.387825    8422 host.go:66] Checking if "addons-837740" exists ...
	I0915 06:30:52.388299    8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
	I0915 06:30:52.396102    8422 addons.go:69] Setting ingress-dns=true in profile "addons-837740"
	I0915 06:30:52.396174    8422 addons.go:234] Setting addon ingress-dns=true in "addons-837740"
	I0915 06:30:52.398247    8422 host.go:66] Checking if "addons-837740" exists ...
	I0915 06:30:52.398806    8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
	I0915 06:30:52.425321    8422 addons.go:69] Setting storage-provisioner=true in profile "addons-837740"
	I0915 06:30:52.425362    8422 addons.go:234] Setting addon storage-provisioner=true in "addons-837740"
	I0915 06:30:52.425402    8422 host.go:66] Checking if "addons-837740" exists ...
	I0915 06:30:52.425475    8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
	I0915 06:30:52.425856    8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
	I0915 06:30:52.450294    8422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:30:52.476287    8422 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-837740"
	I0915 06:30:52.476323    8422 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-837740"
	I0915 06:30:52.476676    8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
	I0915 06:30:52.486281    8422 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0915 06:30:52.491489    8422 addons.go:69] Setting volcano=true in profile "addons-837740"
	I0915 06:30:52.491523    8422 addons.go:234] Setting addon volcano=true in "addons-837740"
	I0915 06:30:52.491559    8422 host.go:66] Checking if "addons-837740" exists ...
	I0915 06:30:52.492025    8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
	I0915 06:30:52.524840    8422 addons.go:69] Setting volumesnapshots=true in profile "addons-837740"
	I0915 06:30:52.529213    8422 addons.go:234] Setting addon volumesnapshots=true in "addons-837740"
	I0915 06:30:52.529289    8422 host.go:66] Checking if "addons-837740" exists ...
	I0915 06:30:52.570168    8422 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0915 06:30:52.570241    8422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0915 06:30:52.570335    8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
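	The Go template in the inspect call above extracts the host port that Docker mapped to the container's port 22, which the SSH clients below then dial (127.0.0.1:32768 in this run). Equivalent stand-alone checks:

	  docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-837740
	  docker port addons-837740 22   # same answer, friendlier syntax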
	I0915 06:30:52.594094    8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
	I0915 06:30:52.598962    8422 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0915 06:30:52.617832    8422 host.go:66] Checking if "addons-837740" exists ...
	I0915 06:30:52.628331    8422 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0915 06:30:52.628450    8422 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0915 06:30:52.630038    8422 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0915 06:30:52.630062    8422 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0915 06:30:52.630127    8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
	I0915 06:30:52.632074    8422 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 06:30:52.637435    8422 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 06:30:52.639751    8422 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0915 06:30:52.639773    8422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0915 06:30:52.639912    8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
	I0915 06:30:52.644727    8422 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0915 06:30:52.647582    8422 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0915 06:30:52.647652    8422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0915 06:30:52.647748    8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
	I0915 06:30:52.617921    8422 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0915 06:30:52.667783    8422 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0915 06:30:52.667809    8422 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0915 06:30:52.667881    8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
	I0915 06:30:52.625218    8422 addons.go:234] Setting addon default-storageclass=true in "addons-837740"
	I0915 06:30:52.674533    8422 host.go:66] Checking if "addons-837740" exists ...
	I0915 06:30:52.675001    8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
	I0915 06:30:52.678849    8422 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0915 06:30:52.680867    8422 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0915 06:30:52.680890    8422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0915 06:30:52.680954    8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
	I0915 06:30:52.696243    8422 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0915 06:30:52.700473    8422 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0915 06:30:52.702625    8422 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0915 06:30:52.707019    8422 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0915 06:30:52.711432    8422 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0915 06:30:52.713401    8422 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0915 06:30:52.717753    8422 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0915 06:30:52.718340    8422 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0915 06:30:52.717764    8422 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 06:30:52.719877    8422 out.go:177]   - Using image docker.io/registry:2.8.3
	I0915 06:30:52.719315    8422 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0915 06:30:52.719565    8422 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-837740"
	I0915 06:30:52.734995    8422 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 06:30:52.735014    8422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0915 06:30:52.735077    8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
	I0915 06:30:52.742336    8422 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0915 06:30:52.742416    8422 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0915 06:30:52.742521    8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
	I0915 06:30:52.754703    8422 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0915 06:30:52.754784    8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
	I0915 06:30:52.755837    8422 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0915 06:30:52.757970    8422 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0915 06:30:52.759882    8422 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0915 06:30:52.767275    8422 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0915 06:30:52.767440    8422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0915 06:30:52.772811    8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
	I0915 06:30:52.778356    8422 host.go:66] Checking if "addons-837740" exists ...
	I0915 06:30:52.778825    8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
	I0915 06:30:52.798088    8422 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0915 06:30:52.798372    8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
	I0915 06:30:52.799293    8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
	I0915 06:30:52.800254    8422 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0915 06:30:52.800271    8422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0915 06:30:52.800329    8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
	I0915 06:30:52.834245    8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
	I0915 06:30:52.834963    8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
	I0915 06:30:52.848635    8422 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0915 06:30:52.855724    8422 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0915 06:30:52.855746    8422 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0915 06:30:52.855803    8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
	I0915 06:30:52.857628    8422 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0915 06:30:52.857692    8422 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0915 06:30:52.857777    8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
	I0915 06:30:52.875682    8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
	I0915 06:30:52.903216    8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
	I0915 06:30:52.940106    8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
	I0915 06:30:52.944234    8422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
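	The sed pipeline above splices a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to the gateway address (192.168.49.1) from inside the cluster. To view the edited Corefile afterwards (a sketch reusing the same kubeconfig):

	  sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'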
	I0915 06:30:52.950257    8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
	I0915 06:30:52.955760    8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
	I0915 06:30:52.972182    8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
	I0915 06:30:52.972829    8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
	I0915 06:30:52.977571    8422 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 06:30:52.985021    8422 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0915 06:30:52.987363    8422 out.go:177]   - Using image docker.io/busybox:stable
	I0915 06:30:52.987767    8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
	I0915 06:30:52.989967    8422 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0915 06:30:52.990060    8422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0915 06:30:52.990126    8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
	W0915 06:30:53.012274    8422 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0915 06:30:53.012316    8422 retry.go:31] will retry after 330.561655ms: ssh: handshake failed: EOF
	I0915 06:30:53.025820    8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
	I0915 06:30:53.034226    8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
	I0915 06:30:53.469649    8422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0915 06:30:53.579647    8422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0915 06:30:53.587309    8422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0915 06:30:53.658478    8422 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0915 06:30:53.658506    8422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0915 06:30:53.677491    8422 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0915 06:30:53.677528    8422 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0915 06:30:53.684567    8422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0915 06:30:53.741538    8422 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0915 06:30:53.741562    8422 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0915 06:30:53.759773    8422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0915 06:30:53.778582    8422 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0915 06:30:53.778608    8422 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0915 06:30:53.805896    8422 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0915 06:30:53.805922    8422 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0915 06:30:53.821559    8422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 06:30:53.947952    8422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0915 06:30:53.949436    8422 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0915 06:30:53.949467    8422 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0915 06:30:54.070300    8422 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0915 06:30:54.070324    8422 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0915 06:30:54.156683    8422 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0915 06:30:54.156724    8422 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0915 06:30:54.180720    8422 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0915 06:30:54.180747    8422 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0915 06:30:54.299118    8422 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0915 06:30:54.299144    8422 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0915 06:30:54.391059    8422 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0915 06:30:54.391083    8422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0915 06:30:54.590282    8422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0915 06:30:54.741747    8422 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0915 06:30:54.741774    8422 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0915 06:30:54.763220    8422 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0915 06:30:54.763246    8422 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0915 06:30:54.781434    8422 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 06:30:54.781459    8422 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0915 06:30:54.791889    8422 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0915 06:30:54.791914    8422 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0915 06:30:54.811362    8422 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0915 06:30:54.811390    8422 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0915 06:30:54.859134    8422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0915 06:30:54.933013    8422 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0915 06:30:54.933041    8422 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0915 06:30:54.972716    8422 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0915 06:30:54.972739    8422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0915 06:30:55.024128    8422 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0915 06:30:55.024175    8422 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0915 06:30:55.035127    8422 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0915 06:30:55.035157    8422 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0915 06:30:55.055284    8422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 06:30:55.162891    8422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0915 06:30:55.238528    8422 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0915 06:30:55.238560    8422 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0915 06:30:55.271807    8422 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 06:30:55.271839    8422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0915 06:30:55.344045    8422 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0915 06:30:55.344076    8422 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0915 06:30:55.516101    8422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 06:30:55.611415    8422 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0915 06:30:55.611461    8422 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0915 06:30:55.650155    8422 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.705891344s)
	I0915 06:30:55.650275    8422 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0915 06:30:55.650207    8422 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.672615235s)
	I0915 06:30:55.651130    8422 node_ready.go:35] waiting up to 6m0s for node "addons-837740" to be "Ready" ...
	I0915 06:30:55.650229    8422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.180557832s)
	I0915 06:30:55.657403    8422 node_ready.go:49] node "addons-837740" has status "Ready":"True"
	I0915 06:30:55.657433    8422 node_ready.go:38] duration metric: took 6.279416ms for node "addons-837740" to be "Ready" ...
	I0915 06:30:55.657442    8422 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 06:30:55.677900    8422 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bqq5d" in "kube-system" namespace to be "Ready" ...
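	The pod_ready polls that follow walk each system-critical label and block until the pod reports Ready. An equivalent one-shot check from outside the harness (assumes the kubectl context this test created):

	  kubectl --context addons-837740 -n kube-system wait \
	    --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m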
	I0915 06:30:55.862100    8422 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0915 06:30:55.862131    8422 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0915 06:30:55.867416    8422 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0915 06:30:55.867455    8422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0915 06:30:56.155045    8422 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-837740" context rescaled to 1 replicas
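	minikube trims CoreDNS from its default two replicas down to one to save resources; the kapi call above is the programmatic form of:

	  kubectl --context addons-837740 -n kube-system scale deployment coredns --replicas=1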
	I0915 06:30:56.221537    8422 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0915 06:30:56.221568    8422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0915 06:30:56.221856    8422 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0915 06:30:56.221882    8422 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0915 06:30:56.577111    8422 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0915 06:30:56.577187    8422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0915 06:30:56.600663    8422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0915 06:30:56.710538    8422 pod_ready.go:93] pod "coredns-7c65d6cfc9-bqq5d" in "kube-system" namespace has status "Ready":"True"
	I0915 06:30:56.710567    8422 pod_ready.go:82] duration metric: took 1.032631894s for pod "coredns-7c65d6cfc9-bqq5d" in "kube-system" namespace to be "Ready" ...
	I0915 06:30:56.710579    8422 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-wglrg" in "kube-system" namespace to be "Ready" ...
	I0915 06:30:56.873960    8422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.294275971s)
	I0915 06:30:56.874142    8422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.286806951s)
	I0915 06:30:57.116832    8422 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0915 06:30:57.116894    8422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0915 06:30:57.732369    8422 pod_ready.go:93] pod "coredns-7c65d6cfc9-wglrg" in "kube-system" namespace has status "Ready":"True"
	I0915 06:30:57.732447    8422 pod_ready.go:82] duration metric: took 1.021860536s for pod "coredns-7c65d6cfc9-wglrg" in "kube-system" namespace to be "Ready" ...
	I0915 06:30:57.732473    8422 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-837740" in "kube-system" namespace to be "Ready" ...
	I0915 06:30:57.762630    8422 pod_ready.go:93] pod "etcd-addons-837740" in "kube-system" namespace has status "Ready":"True"
	I0915 06:30:57.762695    8422 pod_ready.go:82] duration metric: took 30.200526ms for pod "etcd-addons-837740" in "kube-system" namespace to be "Ready" ...
	I0915 06:30:57.762720    8422 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-837740" in "kube-system" namespace to be "Ready" ...
	I0915 06:30:57.791709    8422 pod_ready.go:93] pod "kube-apiserver-addons-837740" in "kube-system" namespace has status "Ready":"True"
	I0915 06:30:57.791784    8422 pod_ready.go:82] duration metric: took 29.044563ms for pod "kube-apiserver-addons-837740" in "kube-system" namespace to be "Ready" ...
	I0915 06:30:57.791811    8422 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-837740" in "kube-system" namespace to be "Ready" ...
	I0915 06:30:57.807110    8422 pod_ready.go:93] pod "kube-controller-manager-addons-837740" in "kube-system" namespace has status "Ready":"True"
	I0915 06:30:57.807175    8422 pod_ready.go:82] duration metric: took 15.344201ms for pod "kube-controller-manager-addons-837740" in "kube-system" namespace to be "Ready" ...
	I0915 06:30:57.807201    8422 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vjdxv" in "kube-system" namespace to be "Ready" ...
	I0915 06:30:58.054482    8422 pod_ready.go:93] pod "kube-proxy-vjdxv" in "kube-system" namespace has status "Ready":"True"
	I0915 06:30:58.054579    8422 pod_ready.go:82] duration metric: took 247.356077ms for pod "kube-proxy-vjdxv" in "kube-system" namespace to be "Ready" ...
	I0915 06:30:58.054611    8422 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-837740" in "kube-system" namespace to be "Ready" ...
	I0915 06:30:58.100627    8422 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0915 06:30:58.100694    8422 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0915 06:30:58.122052    8422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.437447888s)
	I0915 06:30:58.455228    8422 pod_ready.go:93] pod "kube-scheduler-addons-837740" in "kube-system" namespace has status "Ready":"True"
	I0915 06:30:58.455295    8422 pod_ready.go:82] duration metric: took 400.645849ms for pod "kube-scheduler-addons-837740" in "kube-system" namespace to be "Ready" ...
	I0915 06:30:58.455329    8422 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-tt4ct" in "kube-system" namespace to be "Ready" ...
	I0915 06:30:58.636800    8422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0915 06:30:59.743893    8422 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0915 06:30:59.744019    8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
	I0915 06:30:59.772013    8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
	I0915 06:31:00.498138    8422 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-tt4ct" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:00.839311    8422 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0915 06:31:01.144376    8422 addons.go:234] Setting addon gcp-auth=true in "addons-837740"
	I0915 06:31:01.144483    8422 host.go:66] Checking if "addons-837740" exists ...
	I0915 06:31:01.144999    8422 cli_runner.go:164] Run: docker container inspect addons-837740 --format={{.State.Status}}
	I0915 06:31:01.171298    8422 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0915 06:31:01.171398    8422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-837740
	I0915 06:31:01.195101    8422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/addons-837740/id_rsa Username:docker}
	I0915 06:31:02.962472    8422 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-tt4ct" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:03.243814    8422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.422220417s)
	I0915 06:31:03.243880    8422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.484083316s)
	I0915 06:31:03.243897    8422 addons.go:475] Verifying addon ingress=true in "addons-837740"
	I0915 06:31:03.246172    8422 out.go:177] * Verifying ingress addon...
	I0915 06:31:03.249453    8422 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0915 06:31:03.253659    8422 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0915 06:31:03.253689    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:03.754543    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:04.310699    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:04.785972    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:04.979597    8422 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-tt4ct" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:05.303688    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:05.381688    8422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.433701251s)
	I0915 06:31:05.381793    8422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.791485476s)
	I0915 06:31:05.382043    8422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.522881759s)
	I0915 06:31:05.382084    8422 addons.go:475] Verifying addon registry=true in "addons-837740"
	I0915 06:31:05.382331    8422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.327014308s)
	I0915 06:31:05.382368    8422 addons.go:475] Verifying addon metrics-server=true in "addons-837740"
	I0915 06:31:05.382448    8422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.219494557s)
	I0915 06:31:05.382710    8422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.866576126s)
	W0915 06:31:05.382737    8422 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0915 06:31:05.382752    8422 retry.go:31] will retry after 292.718465ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0915 06:31:05.382815    8422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.782083545s)
	I0915 06:31:05.385111    8422 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-837740 service yakd-dashboard -n yakd-dashboard
	
	I0915 06:31:05.385121    8422 out.go:177] * Verifying registry addon...
	I0915 06:31:05.388627    8422 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0915 06:31:05.452051    8422 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0915 06:31:05.452074    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:05.676083    8422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
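	The failure logged above is an apply-ordering race: the VolumeSnapshotClass object was submitted in the same batch as the CRDs that define its kind, and the API server had not yet registered the new type, hence "ensure CRDs are installed first". The --force retry above re-applies the batch once the CRDs exist (its completion appears further below). A sketch of how to sidestep the race entirely (not what minikube does here): apply the CRDs alone, wait for them to be Established, then apply the custom resources:

	  kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	  kubectl wait --for=condition=Established --timeout=60s \
	    crd/volumesnapshotclasses.snapshot.storage.k8s.io
	  kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml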
	I0915 06:31:05.794222    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:05.894401    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:06.257441    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:06.371074    8422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.734155373s)
	I0915 06:31:06.371245    8422 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-837740"
	I0915 06:31:06.371196    8422 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.199838625s)
	I0915 06:31:06.373677    8422 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 06:31:06.373762    8422 out.go:177] * Verifying csi-hostpath-driver addon...
	I0915 06:31:06.376746    8422 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0915 06:31:06.377481    8422 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0915 06:31:06.379377    8422 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0915 06:31:06.379439    8422 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0915 06:31:06.388908    8422 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0915 06:31:06.388984    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:06.392286    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:06.445609    8422 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0915 06:31:06.445688    8422 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0915 06:31:06.487848    8422 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0915 06:31:06.487919    8422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0915 06:31:06.571993    8422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
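Each gcp-auth manifest above is first copied onto the node (the ssh_runner scp lines) and then applied in a single kubectl invocation with KUBECONFIG pointed at the cluster's kubeconfig. A hedged os/exec sketch of that apply step, with the binary path and manifest paths taken verbatim from the log (sudo is omitted and exec-ing kubectl like this is an illustration, not a claim about minikube's internals):

	// Sketch only: apply several addon manifests in one kubectl invocation.
	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		args := []string{"apply",
			"-f", "/etc/kubernetes/addons/gcp-auth-ns.yaml",
			"-f", "/etc/kubernetes/addons/gcp-auth-service.yaml",
			"-f", "/etc/kubernetes/addons/gcp-auth-webhook.yaml",
		}
		cmd := exec.Command("/var/lib/minikube/binaries/v1.31.1/kubectl", args...)
		// Point kubectl at the cluster's kubeconfig, as in the logged command.
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}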
	I0915 06:31:06.753824    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:06.884720    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:06.892459    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:07.254674    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:07.383169    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:07.399810    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:07.495117    8422 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-tt4ct" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:07.754760    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:07.882386    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:07.985233    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:08.092864    8422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.416683008s)
	I0915 06:31:08.093017    8422 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.520952208s)
	I0915 06:31:08.095982    8422 addons.go:475] Verifying addon gcp-auth=true in "addons-837740"
	I0915 06:31:08.098652    8422 out.go:177] * Verifying gcp-auth addon...
	I0915 06:31:08.101212    8422 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0915 06:31:08.104811    8422 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0915 06:31:08.254502    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:08.383216    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:08.393007    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:08.754922    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:08.882436    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:08.892712    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:09.254472    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:09.382834    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:09.393182    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:09.761232    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:09.882704    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:09.892757    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:09.962949    8422 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-tt4ct" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:10.255429    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:10.382149    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:10.392450    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:10.754331    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:10.883315    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:10.892098    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:11.256370    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:11.382995    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:11.393047    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:11.468274    8422 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-tt4ct" in "kube-system" namespace has status "Ready":"True"
	I0915 06:31:11.468351    8422 pod_ready.go:82] duration metric: took 13.012999524s for pod "nvidia-device-plugin-daemonset-tt4ct" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:11.468377    8422 pod_ready.go:39] duration metric: took 15.810922224s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 06:31:11.468423    8422 api_server.go:52] waiting for apiserver process to appear ...
	I0915 06:31:11.468520    8422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 06:31:11.486103    8422 api_server.go:72] duration metric: took 19.143465933s to wait for apiserver process to appear ...
	I0915 06:31:11.486131    8422 api_server.go:88] waiting for apiserver healthz status ...
	I0915 06:31:11.486153    8422 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0915 06:31:11.494253    8422 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0915 06:31:11.495595    8422 api_server.go:141] control plane version: v1.31.1
	I0915 06:31:11.495626    8422 api_server.go:131] duration metric: took 9.487022ms to wait for apiserver health ...
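The apiserver wait above has two parts: pgrep first confirms a kube-apiserver process matching the minikube profile exists, then an HTTPS GET against /healthz must return 200 with body "ok". A rough Go sketch of both checks, with the pgrep arguments and URL taken from the log; skipping TLS verification is a simplification for illustration (a real client would trust the cluster CA):

	// Sketch only: reproduce the two apiserver checks from the log above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"os/exec"
	)

	func main() {
		// Step 1: does an apiserver process exist? Mirrors the logged pgrep.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err != nil {
			panic("no kube-apiserver process found")
		}

		// Step 2: is it healthy? InsecureSkipVerify is an assumption for brevity.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}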
	I0915 06:31:11.495635    8422 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 06:31:11.504819    8422 system_pods.go:59] 17 kube-system pods found
	I0915 06:31:11.504857    8422 system_pods.go:61] "coredns-7c65d6cfc9-wglrg" [b6844185-6d57-460b-bedc-75eb27fab2b2] Running
	I0915 06:31:11.504870    8422 system_pods.go:61] "csi-hostpath-attacher-0" [4259dd24-69b8-4f9a-b344-93e221d119f6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0915 06:31:11.504879    8422 system_pods.go:61] "csi-hostpath-resizer-0" [f7ab10d0-07f7-49fe-94e9-83b4b658c0cd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0915 06:31:11.504890    8422 system_pods.go:61] "csi-hostpathplugin-m2zjj" [6897b926-699d-4e69-858b-dfb3b5ae22a5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0915 06:31:11.504901    8422 system_pods.go:61] "etcd-addons-837740" [54093b45-848e-42fb-9d63-0326870285f2] Running
	I0915 06:31:11.504906    8422 system_pods.go:61] "kube-apiserver-addons-837740" [955b748d-a741-45cf-9d92-dff6d388b528] Running
	I0915 06:31:11.504914    8422 system_pods.go:61] "kube-controller-manager-addons-837740" [4b8af77d-cf8c-4f57-9308-bb8e3f97ead7] Running
	I0915 06:31:11.504921    8422 system_pods.go:61] "kube-ingress-dns-minikube" [226b1200-80f2-453e-910a-99218aad1e1d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0915 06:31:11.504928    8422 system_pods.go:61] "kube-proxy-vjdxv" [e1764451-6cbe-4223-b73f-5a1621e02c92] Running
	I0915 06:31:11.504933    8422 system_pods.go:61] "kube-scheduler-addons-837740" [0d56d017-af93-4829-b0b4-34fa2a27834a] Running
	I0915 06:31:11.504939    8422 system_pods.go:61] "metrics-server-84c5f94fbc-bgbxc" [f10bfbc8-7858-4a49-9947-c358eaefb7b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0915 06:31:11.504945    8422 system_pods.go:61] "nvidia-device-plugin-daemonset-tt4ct" [201ece5f-7d16-40c8-b54a-2afc0f9b1595] Running
	I0915 06:31:11.504951    8422 system_pods.go:61] "registry-66c9cd494c-7gzvx" [1a2130f7-6cbe-4a8b-bea3-e3e4436003d2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0915 06:31:11.504957    8422 system_pods.go:61] "registry-proxy-htg6g" [53474271-c9f2-4050-bf68-df5e1935aa85] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0915 06:31:11.504971    8422 system_pods.go:61] "snapshot-controller-56fcc65765-2rhl5" [a143ff8a-8d41-45ea-82b5-9097104ed247] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0915 06:31:11.504978    8422 system_pods.go:61] "snapshot-controller-56fcc65765-pbftt" [f2ca4d13-a7b0-41a7-a845-13c6e7c1e7ed] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0915 06:31:11.504982    8422 system_pods.go:61] "storage-provisioner" [651c275e-abb9-49f8-b7b5-d3928708b097] Running
	I0915 06:31:11.504992    8422 system_pods.go:74] duration metric: took 9.350022ms to wait for pod list to return data ...
	I0915 06:31:11.505003    8422 default_sa.go:34] waiting for default service account to be created ...
	I0915 06:31:11.507963    8422 default_sa.go:45] found service account: "default"
	I0915 06:31:11.507992    8422 default_sa.go:55] duration metric: took 2.982769ms for default service account to be created ...
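The default service account check is a single Get against the default namespace; pods cannot be created there until the controller manager has created it. A client-go one-off, sketched under the same kubeconfig assumption as the earlier snippet:

	// Sketch only: verify the "default" ServiceAccount exists in "default".
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		sa, err := client.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		if err != nil {
			panic(err) // a NotFound error here means the controller hasn't created it yet
		}
		fmt.Printf("found service account: %q\n", sa.Name)
	}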
	I0915 06:31:11.508001    8422 system_pods.go:116] waiting for k8s-apps to be running ...
	I0915 06:31:11.518544    8422 system_pods.go:86] 17 kube-system pods found
	I0915 06:31:11.518625    8422 system_pods.go:89] "coredns-7c65d6cfc9-wglrg" [b6844185-6d57-460b-bedc-75eb27fab2b2] Running
	I0915 06:31:11.518650    8422 system_pods.go:89] "csi-hostpath-attacher-0" [4259dd24-69b8-4f9a-b344-93e221d119f6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0915 06:31:11.518697    8422 system_pods.go:89] "csi-hostpath-resizer-0" [f7ab10d0-07f7-49fe-94e9-83b4b658c0cd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0915 06:31:11.518726    8422 system_pods.go:89] "csi-hostpathplugin-m2zjj" [6897b926-699d-4e69-858b-dfb3b5ae22a5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0915 06:31:11.518747    8422 system_pods.go:89] "etcd-addons-837740" [54093b45-848e-42fb-9d63-0326870285f2] Running
	I0915 06:31:11.518781    8422 system_pods.go:89] "kube-apiserver-addons-837740" [955b748d-a741-45cf-9d92-dff6d388b528] Running
	I0915 06:31:11.518805    8422 system_pods.go:89] "kube-controller-manager-addons-837740" [4b8af77d-cf8c-4f57-9308-bb8e3f97ead7] Running
	I0915 06:31:11.518829    8422 system_pods.go:89] "kube-ingress-dns-minikube" [226b1200-80f2-453e-910a-99218aad1e1d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0915 06:31:11.518860    8422 system_pods.go:89] "kube-proxy-vjdxv" [e1764451-6cbe-4223-b73f-5a1621e02c92] Running
	I0915 06:31:11.518884    8422 system_pods.go:89] "kube-scheduler-addons-837740" [0d56d017-af93-4829-b0b4-34fa2a27834a] Running
	I0915 06:31:11.518906    8422 system_pods.go:89] "metrics-server-84c5f94fbc-bgbxc" [f10bfbc8-7858-4a49-9947-c358eaefb7b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0915 06:31:11.518951    8422 system_pods.go:89] "nvidia-device-plugin-daemonset-tt4ct" [201ece5f-7d16-40c8-b54a-2afc0f9b1595] Running
	I0915 06:31:11.518978    8422 system_pods.go:89] "registry-66c9cd494c-7gzvx" [1a2130f7-6cbe-4a8b-bea3-e3e4436003d2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0915 06:31:11.519001    8422 system_pods.go:89] "registry-proxy-htg6g" [53474271-c9f2-4050-bf68-df5e1935aa85] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0915 06:31:11.519039    8422 system_pods.go:89] "snapshot-controller-56fcc65765-2rhl5" [a143ff8a-8d41-45ea-82b5-9097104ed247] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0915 06:31:11.519061    8422 system_pods.go:89] "snapshot-controller-56fcc65765-pbftt" [f2ca4d13-a7b0-41a7-a845-13c6e7c1e7ed] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0915 06:31:11.519079    8422 system_pods.go:89] "storage-provisioner" [651c275e-abb9-49f8-b7b5-d3928708b097] Running
	I0915 06:31:11.519117    8422 system_pods.go:126] duration metric: took 11.107413ms to wait for k8s-apps to be running ...
	I0915 06:31:11.519139    8422 system_svc.go:44] waiting for kubelet service to be running ....
	I0915 06:31:11.519232    8422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 06:31:11.534655    8422 system_svc.go:56] duration metric: took 15.497771ms WaitForService to wait for kubelet
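The kubelet check shells out to systemd: systemctl is-active --quiet prints nothing and exits 0 only when the unit is active, so the exit code alone answers the question. A minimal exec sketch mirroring the command from the log verbatim:

	// Sketch only: kubelet liveness via systemd exit status, as logged above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Exit code 0 means active; any nonzero status surfaces as an error.
		cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
		if err := cmd.Run(); err != nil {
			fmt.Println("kubelet is not active:", err)
			return
		}
		fmt.Println("kubelet is active")
	}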
	I0915 06:31:11.534732    8422 kubeadm.go:582] duration metric: took 19.19209902s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 06:31:11.534781    8422 node_conditions.go:102] verifying NodePressure condition ...
	I0915 06:31:11.538526    8422 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0915 06:31:11.538608    8422 node_conditions.go:123] node cpu capacity is 2
	I0915 06:31:11.538633    8422 node_conditions.go:105] duration metric: took 3.812258ms to run NodePressure ...
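The NodePressure step reads each node's status rather than running anything on it: the capacities above (203034800Ki ephemeral storage, 2 CPUs) come from status.capacity, and the pressure conditions (MemoryPressure, DiskPressure, PIDPressure) are expected to be False. A client-go sketch of that read, under the same assumptions as the earlier snippets:

	// Sketch only: inspect node capacity and pressure conditions.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, eph.String(), cpu.String())
			for _, c := range n.Status.Conditions {
				switch c.Type {
				case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
					if c.Status != corev1.ConditionFalse {
						fmt.Printf("  pressure condition %s is %s\n", c.Type, c.Status)
					}
				}
			}
		}
	}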
	I0915 06:31:11.538657    8422 start.go:241] waiting for startup goroutines ...
	I0915 06:31:11.757093    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:11.885155    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:11.893664    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:12.255360    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:12.382863    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:12.393185    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:12.754325    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:12.881788    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:12.892037    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:13.254545    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:13.382845    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:13.392737    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:13.753725    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:13.882241    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:13.892028    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:14.254320    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:14.386265    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:14.392972    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:14.753500    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:14.882807    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:14.892120    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:15.254199    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:15.382950    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:15.392494    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:15.754020    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:15.882662    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:15.892296    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:16.254888    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:16.382418    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:16.393250    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:16.754039    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:16.882241    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:16.892621    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:17.254624    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:17.382485    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:17.393310    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:17.755257    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:17.883602    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:17.893211    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:18.259780    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:18.383880    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:18.393748    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:18.754743    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:18.882696    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:18.892540    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:19.253772    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:19.383228    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:19.393074    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:19.753792    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:19.883440    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:19.892913    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:20.255588    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:20.382877    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:20.393263    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:20.754949    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:20.883109    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:20.892431    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:21.253676    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:21.383155    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:21.392403    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:21.754441    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:21.881779    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:21.892685    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:22.254178    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:22.386931    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:22.392753    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:22.754101    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:22.882483    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:22.893343    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:23.254567    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:23.382710    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:23.392946    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:23.753900    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:23.882587    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:23.892235    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:24.253912    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:24.382640    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:24.392170    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:24.754461    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:24.883245    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:24.892365    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:25.254512    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:25.383120    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:25.392799    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:25.754325    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:25.882976    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:25.893193    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:26.254244    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:26.382872    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:26.392943    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:26.754588    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:26.884131    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:26.892714    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:27.256212    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:27.383744    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:27.392991    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:27.754117    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:27.883430    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:27.894442    8422 kapi.go:107] duration metric: took 22.505814726s to wait for kubernetes.io/minikube-addons=registry ...
	I0915 06:31:28.254392    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:28.382953    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:28.754211    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:28.882150    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:29.254186    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:29.382438    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:29.754447    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:29.882147    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:30.257168    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:30.382948    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:30.754165    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:30.882885    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:31.254556    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:31.382398    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:31.754056    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:31.883308    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:32.254638    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:32.383492    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:32.755487    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:32.884106    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:33.254838    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:33.385272    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:33.754789    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:33.882914    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:34.254488    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:34.383030    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:34.754517    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:34.882289    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:35.261128    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:35.382595    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:35.755502    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:35.882203    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:36.254159    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:36.391613    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:36.754866    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:36.885846    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:37.254761    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:37.383119    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:37.754739    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:37.882657    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:38.253781    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:38.382800    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:38.754574    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:38.882536    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:39.254937    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:39.382914    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:39.760491    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:39.884782    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:40.253844    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:40.382741    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:40.757903    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:40.888293    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:41.255242    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:41.383399    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:41.754387    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:41.882927    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:42.255922    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:42.383652    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:42.754662    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:42.883036    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:43.254252    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:43.383196    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:43.753750    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:43.882467    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:44.255434    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:44.383311    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:44.754587    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:44.886533    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:45.255415    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:45.384329    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:45.753651    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:45.883542    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:46.255516    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:46.382297    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:46.754440    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:46.882768    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:47.253931    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:47.383795    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:47.754506    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:47.882595    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:48.254332    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:48.383253    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:48.754824    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:48.882313    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:49.253628    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:49.382245    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:49.755581    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:49.882703    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:50.254161    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:50.397553    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:50.761126    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:50.886439    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:51.258543    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:51.383474    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:51.754795    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:51.882228    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:52.254792    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:52.382538    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:52.754377    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:52.883287    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:53.254655    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:53.382333    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:53.754883    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:53.882581    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:54.254856    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:54.382835    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:54.754782    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:54.882221    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:55.254259    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:55.382641    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:55.753779    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:55.882148    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:56.254427    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:56.381759    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:56.754630    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:56.883274    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:57.255042    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:57.382494    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:57.754195    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:57.882524    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:58.253836    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:58.382372    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:58.758367    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:58.882893    8422 kapi.go:107] duration metric: took 52.505409513s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0915 06:31:59.253959    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:59.754456    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:00.265549    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:00.755547    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:01.254128    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:01.754077    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:02.253874    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:02.758995    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:03.254675    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:03.754249    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:04.253794    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:04.754044    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:05.253945    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:05.754526    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:06.254157    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:06.753899    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:07.254431    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:07.753796    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:08.254232    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:08.753738    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:09.256547    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:09.754651    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:10.253852    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:10.753973    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:11.254402    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:11.754150    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:12.255858    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:12.755873    8422 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:32:13.259355    8422 kapi.go:107] duration metric: took 1m10.009898336s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0915 06:32:30.106100    8422 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0915 06:32:30.106132    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:30.605881    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:31.105279    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:31.604945    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:32.105186    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:32.604231    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:33.105772    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:33.605525    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:34.105268    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:34.604720    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:35.105952    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... the same "waiting for pod" poll line repeated at ~500ms intervals from 06:32:35.605 through 06:33:38.106 elided ...]
	I0915 06:33:38.605076    8422 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:33:39.106898    8422 kapi.go:107] duration metric: took 2m31.005685267s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0915 06:33:39.109602    8422 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-837740 cluster.
	I0915 06:33:39.111761    8422 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0915 06:33:39.114067    8422 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0915 06:33:39.116350    8422 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, ingress-dns, storage-provisioner-rancher, storage-provisioner, volcano, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0915 06:33:39.118718    8422 addons.go:510] duration metric: took 2m46.775743472s for enable addons: enabled=[nvidia-device-plugin cloud-spanner ingress-dns storage-provisioner-rancher storage-provisioner volcano metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0915 06:33:39.118770    8422 start.go:246] waiting for cluster config update ...
	I0915 06:33:39.118792    8422 start.go:255] writing updated cluster config ...
	I0915 06:33:39.119069    8422 ssh_runner.go:195] Run: rm -f paused
	I0915 06:33:39.498467    8422 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0915 06:33:39.500857    8422 out.go:177] * Done! kubectl is now configured to use "addons-837740" cluster and "default" namespace by default
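	The skip mechanism described above only needs a label on the pod. A minimal sketch, assuming the `gcp-auth-skip-secret` key named in the message (the pod name no-creds and the sleep command are illustrative only):
	
	  kubectl --context addons-837740 run no-creds --restart=Never \
	    --image=gcr.io/k8s-minikube/busybox --labels="gcp-auth-skip-secret=true" -- sleep 3600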
	
	
	==> Docker <==
	Sep 15 06:43:09 addons-837740 dockerd[1283]: time="2024-09-15T06:43:09.849914988Z" level=info msg="ignoring event" container=17350eb906526d1cdde2a4d4fd509447f3457a8bb24f7d71a2548c5a64cfc691 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:43:09 addons-837740 dockerd[1283]: time="2024-09-15T06:43:09.868192544Z" level=info msg="ignoring event" container=4bee5472e10b4907d0e0d39511e68d8778c6611db384488a4f9eaa2293076903 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:43:09 addons-837740 dockerd[1283]: time="2024-09-15T06:43:09.888555920Z" level=info msg="ignoring event" container=687f6cd8a45bd6b244da1fd9fdbca26d94e69ee26439ae695f9a29214a50340d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:43:09 addons-837740 dockerd[1283]: time="2024-09-15T06:43:09.971459466Z" level=info msg="ignoring event" container=6b294fd05e77d3c960de00f514fea241a5a11c4d6e1604267eefc3fe2820b63a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:43:09 addons-837740 dockerd[1283]: time="2024-09-15T06:43:09.978527052Z" level=info msg="ignoring event" container=d5b301403156dc4a6d9b072300791fea1085cbc90d5e1c2b3ec9f62a60b70a14 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:43:10 addons-837740 dockerd[1283]: time="2024-09-15T06:43:10.080604652Z" level=info msg="ignoring event" container=71649d8ca1311f2ebbe2004db2ed56a44df9d3a1989e1f5dd061b056ff1d8698 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:43:10 addons-837740 dockerd[1283]: time="2024-09-15T06:43:10.179725320Z" level=info msg="ignoring event" container=69a0efde4f17218f1cd7942ce79ec392f788c7b3c9dc9a6ca86e2a18945aff75 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:43:10 addons-837740 dockerd[1283]: time="2024-09-15T06:43:10.219264260Z" level=info msg="ignoring event" container=d814e8b2e2b89dc56b69eacfc3c3ef2e4894b563f2b6fdacf2ed20529053a843 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:43:13 addons-837740 cri-dockerd[1542]: time="2024-09-15T06:43:13Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 15 06:43:13 addons-837740 dockerd[1283]: time="2024-09-15T06:43:13.686464163Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 15 06:43:13 addons-837740 dockerd[1283]: time="2024-09-15T06:43:13.689130381Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 15 06:43:14 addons-837740 dockerd[1283]: time="2024-09-15T06:43:14.662954216Z" level=info msg="ignoring event" container=3f46650b6dd268c5a1476ba015e9087e54f5d9b549fca258a34436b53fc8ee9d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:43:16 addons-837740 dockerd[1283]: time="2024-09-15T06:43:16.427701971Z" level=info msg="ignoring event" container=c2fe5b3d8de6a6cf17e7bbf02209f630d939c687ce57fd78ae95776c9fd94995 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:43:16 addons-837740 dockerd[1283]: time="2024-09-15T06:43:16.447995670Z" level=info msg="ignoring event" container=fd3f1ab94bcda2093374499980a9f67c33628b620c0cc4b96803f9472e1a220d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:43:16 addons-837740 dockerd[1283]: time="2024-09-15T06:43:16.615581941Z" level=info msg="ignoring event" container=f377f9ac097a400de8a0883500d32b4f6abd638c22aef91a4762ba8350d15710 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:43:16 addons-837740 dockerd[1283]: time="2024-09-15T06:43:16.636278200Z" level=info msg="ignoring event" container=59a04b328fcdab08a0f4647fd04f0a1fdbaa8d6a9b7c71700c158b3774dc1c49 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:43:23 addons-837740 dockerd[1283]: time="2024-09-15T06:43:23.178957964Z" level=info msg="ignoring event" container=7c0c3036c0a2d5acd2babaadf1462c4e2a9bf95299afd8c21ff6e8aa7178a4d9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:43:23 addons-837740 dockerd[1283]: time="2024-09-15T06:43:23.301935763Z" level=info msg="ignoring event" container=0e1c8d4e0ea0f3728998fe5b6a9ac1ce7b97121e1498881e67e209f008e7f6c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:43:27 addons-837740 dockerd[1283]: time="2024-09-15T06:43:27.794155287Z" level=info msg="ignoring event" container=9a84052494ec7d2432a715fcf58f2e614975f9f7102d47422ccb553158aea38b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:43:34 addons-837740 cri-dockerd[1542]: time="2024-09-15T06:43:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1234694c78ad9e8ecd8931c073c64ca75118f5c2dc288e47d54f89b739dc4cf3/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 15 06:43:34 addons-837740 dockerd[1283]: time="2024-09-15T06:43:34.477955216Z" level=info msg="ignoring event" container=4aa7443cf5a49d63ffcd3fec8f8c32fa724815130e094fb6da70b4c202f2b193 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:43:35 addons-837740 dockerd[1283]: time="2024-09-15T06:43:35.297030233Z" level=info msg="ignoring event" container=3b04ae659313ca8907015471a4a2a448ca2db0bbd685d3d461b94db188a79b3c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:43:35 addons-837740 dockerd[1283]: time="2024-09-15T06:43:35.360508651Z" level=info msg="ignoring event" container=8bd916583c0a3ff5ec6e87cbe9622deabdbc17a9a77d70ee134c30580e20fb78 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:43:35 addons-837740 dockerd[1283]: time="2024-09-15T06:43:35.667250605Z" level=info msg="ignoring event" container=9ba3c3ed633c3368e208ef18ddde9526220e23b6df04d6930e5b1e039bed7dc6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:43:35 addons-837740 dockerd[1283]: time="2024-09-15T06:43:35.788570422Z" level=info msg="ignoring event" container=21b8d568ae181fc9fdc7cd300ac225558db58efea00dfb444abc3db449b38932 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
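	The 06:43:13 entries above show the gcr.io/k8s-minikube/busybox pull being rejected with "unauthorized: authentication failed". As a diagnostic sketch, the same pull can be retried directly against the node's own Docker daemon (minikube's ssh command passthrough assumed):
	
	  out/minikube-linux-arm64 -p addons-837740 ssh -- docker pull gcr.io/k8s-minikube/busybox:latest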
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	5d3d92bbe7e73       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                   0                   8943506438f1e       gcp-auth-89d5ffd79-4vxbx
	1515171508f56       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             11 minutes ago      Running             controller                 0                   d339e04ac54e8       ingress-nginx-controller-bc57996ff-9d94n
	a84f7b7cff6ad       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              patch                      0                   d25b54090e6ce       ingress-nginx-admission-patch-tm9xd
	cd2acbf476609       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                     0                   80bcb67b609a0       ingress-nginx-admission-create-7wph7
	5833b76ec193b       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                        12 minutes ago      Running             yakd                       0                   ccc3387930cfa       yakd-dashboard-67d98fc6b-txt6k
	5616443de8678       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago      Running             local-path-provisioner     0                   a26247b7fd47f       local-path-provisioner-86d989889c-ht8wp
	07b93ec46d2e7       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               12 minutes ago      Running             cloud-spanner-emulator     0                   1bd6cea006a5c       cloud-spanner-emulator-769b77f747-f96b4
	7547215974b9a       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             12 minutes ago      Running             minikube-ingress-dns       0                   94a8919b20678       kube-ingress-dns-minikube
	ac1dad073bd8c       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                     12 minutes ago      Running             nvidia-device-plugin-ctr   0                   9a8d3e0b5c154       nvidia-device-plugin-daemonset-tt4ct
	a2553db37f09c       ba04bb24b9575                                                                                                                12 minutes ago      Running             storage-provisioner        0                   8f0abf1d61dc9       storage-provisioner
	54f5f4f11f36a       2f6c962e7b831                                                                                                                12 minutes ago      Running             coredns                    0                   c08d9d16f10f8       coredns-7c65d6cfc9-wglrg
	7e4e2e7c9f9d0       24a140c548c07                                                                                                                12 minutes ago      Running             kube-proxy                 0                   ed5dd56271f16       kube-proxy-vjdxv
	f3f6b32525e6f       27e3830e14027                                                                                                                12 minutes ago      Running             etcd                       0                   ca5702a4a99a4       etcd-addons-837740
	3696b00b24559       279f381cb3736                                                                                                                12 minutes ago      Running             kube-controller-manager    0                   42747464bcc6f       kube-controller-manager-addons-837740
	35aa9c1536cf4       7f8aa378bb47d                                                                                                                12 minutes ago      Running             kube-scheduler             0                   81f56909c7a3b       kube-scheduler-addons-837740
	484ea520c5e9c       d3f53a98c0a9d                                                                                                                12 minutes ago      Running             kube-apiserver             0                   387c88f10c455       kube-apiserver-addons-837740
	
	
	==> controller_ingress [1515171508f5] <==
	I0915 06:32:13.420502       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"3c107480-4a43-4754-9860-5286b822a234", APIVersion:"v1", ResourceVersion:"665", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0915 06:32:13.420792       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"1189cc4c-e2e6-41f0-a058-700fc09bd4a4", APIVersion:"v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0915 06:32:14.596432       7 nginx.go:317] "Starting NGINX process"
	I0915 06:32:14.598042       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0915 06:32:14.599269       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0915 06:32:14.599451       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0915 06:32:14.616247       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0915 06:32:14.617568       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-9d94n"
	I0915 06:32:14.632730       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-9d94n" node="addons-837740"
	I0915 06:32:14.652133       7 controller.go:213] "Backend successfully reloaded"
	I0915 06:32:14.652251       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0915 06:32:14.652667       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-9d94n", UID:"ef9a33cf-8b7c-474e-9d5c-c747abe32cc7", APIVersion:"v1", ResourceVersion:"1231", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0915 06:43:33.271015       7 controller.go:1110] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0915 06:43:33.289273       7 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.019s renderingIngressLength:1 renderingIngressTime:0s admissionTime:0.019s testedConfigurationSize:18.1kB}
	I0915 06:43:33.289325       7 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I0915 06:43:33.296852       7 store.go:440] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	I0915 06:43:33.298154       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"f503ed9e-ee0f-4710-9ee6-c18661713cf2", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2768", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W0915 06:43:33.298186       7 controller.go:1110] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0915 06:43:33.298342       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0915 06:43:33.355699       7 controller.go:213] "Backend successfully reloaded"
	I0915 06:43:33.356341       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-9d94n", UID:"ef9a33cf-8b7c-474e-9d5c-c747abe32cc7", APIVersion:"v1", ResourceVersion:"1231", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0915 06:43:36.631612       7 controller.go:1216] Service "default/nginx" does not have any active Endpoint.
	I0915 06:43:36.631723       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0915 06:43:36.677007       7 controller.go:213] "Backend successfully reloaded"
	I0915 06:43:36.677685       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-9d94n", UID:"ef9a33cf-8b7c-474e-9d5c-c747abe32cc7", APIVersion:"v1", ResourceVersion:"1231", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
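	The two warnings above say the Ingress synced while Service "default/nginx" had no ready endpoints yet. A one-line check, as a sketch:
	
	  kubectl --context addons-837740 get endpoints nginx -n default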
	
	
	==> coredns [54f5f4f11f36] <==
	[INFO] 10.244.0.7:52374 - 38856 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000112664s
	[INFO] 10.244.0.7:41098 - 3685 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002214868s
	[INFO] 10.244.0.7:41098 - 22115 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001833461s
	[INFO] 10.244.0.7:50264 - 59173 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000181029s
	[INFO] 10.244.0.7:50264 - 64038 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000100177s
	[INFO] 10.244.0.7:59330 - 5723 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000123585s
	[INFO] 10.244.0.7:59330 - 863 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000065625s
	[INFO] 10.244.0.7:59228 - 30202 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00010752s
	[INFO] 10.244.0.7:59228 - 61414 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000034749s
	[INFO] 10.244.0.7:47583 - 13835 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000079123s
	[INFO] 10.244.0.7:47583 - 11276 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000038121s
	[INFO] 10.244.0.7:36593 - 33327 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001495376s
	[INFO] 10.244.0.7:36593 - 5155 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001127295s
	[INFO] 10.244.0.7:54986 - 56486 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000059085s
	[INFO] 10.244.0.7:54986 - 5017 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000046958s
	[INFO] 10.244.0.25:40158 - 62681 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.004309976s
	[INFO] 10.244.0.25:49141 - 35801 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000589662s
	[INFO] 10.244.0.25:33254 - 12597 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000185747s
	[INFO] 10.244.0.25:42529 - 49710 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000226536s
	[INFO] 10.244.0.25:53656 - 22803 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000130101s
	[INFO] 10.244.0.25:59620 - 40297 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000251545s
	[INFO] 10.244.0.25:37456 - 19181 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.007152876s
	[INFO] 10.244.0.25:49040 - 6923 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.007937705s
	[INFO] 10.244.0.25:57749 - 58328 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00205546s
	[INFO] 10.244.0.25:33131 - 7916 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.002060276s
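	The NXDOMAIN/NOERROR pairs above are the standard ndots:5 search-path expansion, and the final NOERROR answers show cluster DNS resolving registry.kube-system.svc.cluster.local itself. An in-cluster lookup to confirm, as a sketch (the pod name dns-check is illustrative; the busybox pull failure logged earlier may block scheduling this pod):
	
	  kubectl --context addons-837740 run --rm dns-check --restart=Never \
	    --image=gcr.io/k8s-minikube/busybox -it -- nslookup registry.kube-system.svc.cluster.local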
	
	
	==> describe nodes <==
	Name:               addons-837740
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-837740
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=addons-837740
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_15T06_30_48_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-837740
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 06:30:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-837740
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 06:43:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 06:39:28 +0000   Sun, 15 Sep 2024 06:30:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 06:39:28 +0000   Sun, 15 Sep 2024 06:30:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 06:39:28 +0000   Sun, 15 Sep 2024 06:30:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 06:39:28 +0000   Sun, 15 Sep 2024 06:30:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-837740
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 0f74f9ae52494a289584ca577801b569
	  System UUID:                b1422836-2ae3-412b-9023-174332602f9a
	  Boot ID:                    72fc410e-b80c-4eb1-a965-d925e9faaac6
	  Kernel Version:             5.15.0-1069-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m17s
	  default                     cloud-spanner-emulator-769b77f747-f96b4     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  gcp-auth                    gcp-auth-89d5ffd79-4vxbx                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-9d94n    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-wglrg                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-addons-837740                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-837740                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-837740       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-vjdxv                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-837740                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 nvidia-device-plugin-daemonset-tt4ct        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-ht8wp     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-txt6k              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             388Mi (4%)  426Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 12m   kube-proxy       
	  Normal   Starting                 12m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m   kubelet          Node addons-837740 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m   kubelet          Node addons-837740 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m   kubelet          Node addons-837740 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m   node-controller  Node addons-837740 event: Registered Node addons-837740 in Controller
	
	
	==> dmesg <==
	[Sep15 06:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015640] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.462572] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.788358] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.780901] kauditd_printk_skb: 36 callbacks suppressed
	[Sep15 06:33] hrtimer: interrupt took 17557708 ns
	
	
	==> etcd [f3f6b32525e6] <==
	{"level":"info","ts":"2024-09-15T06:30:40.850517Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-15T06:30:40.850538Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-15T06:30:41.682041Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-15T06:30:41.682266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-15T06:30:41.682387Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-15T06:30:41.682493Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-15T06:30:41.682579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-15T06:30:41.682688Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-15T06:30:41.682779Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-15T06:30:41.686129Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T06:30:41.690285Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-837740 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-15T06:30:41.691718Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T06:30:41.692197Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T06:30:41.692544Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-15T06:30:41.692643Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-15T06:30:41.693528Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T06:30:41.693777Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T06:30:41.695531Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T06:30:41.695584Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T06:30:41.703567Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-15T06:30:41.703622Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T06:30:41.718640Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-15T06:40:42.356221Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1838}
	{"level":"info","ts":"2024-09-15T06:40:42.404320Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1838,"took":"46.880111ms","hash":3365489726,"current-db-size-bytes":8634368,"current-db-size":"8.6 MB","current-db-size-in-use-bytes":4833280,"current-db-size-in-use":"4.8 MB"}
	{"level":"info","ts":"2024-09-15T06:40:42.404379Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3365489726,"revision":1838,"compact-revision":-1}
	
	
	==> gcp-auth [5d3d92bbe7e7] <==
	2024/09/15 06:33:38 GCP Auth Webhook started!
	2024/09/15 06:33:55 Ready to marshal response ...
	2024/09/15 06:33:55 Ready to write response ...
	2024/09/15 06:33:56 Ready to marshal response ...
	2024/09/15 06:33:56 Ready to write response ...
	2024/09/15 06:34:19 Ready to marshal response ...
	2024/09/15 06:34:19 Ready to write response ...
	2024/09/15 06:34:19 Ready to marshal response ...
	2024/09/15 06:34:19 Ready to write response ...
	2024/09/15 06:34:20 Ready to marshal response ...
	2024/09/15 06:34:20 Ready to write response ...
	2024/09/15 06:42:31 Ready to marshal response ...
	2024/09/15 06:42:31 Ready to write response ...
	2024/09/15 06:42:34 Ready to marshal response ...
	2024/09/15 06:42:34 Ready to write response ...
	2024/09/15 06:43:00 Ready to marshal response ...
	2024/09/15 06:43:00 Ready to write response ...
	2024/09/15 06:43:33 Ready to marshal response ...
	2024/09/15 06:43:33 Ready to write response ...
	
	
	==> kernel <==
	 06:43:37 up 26 min,  0 users,  load average: 1.97, 0.92, 0.67
	Linux addons-837740 5.15.0-1069-aws #75~20.04.1-Ubuntu SMP Mon Aug 19 16:22:47 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [484ea520c5e9] <==
	W0915 06:34:11.074448       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0915 06:34:11.360354       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0915 06:34:11.426508       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0915 06:34:11.593677       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0915 06:34:11.785503       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0915 06:34:11.836913       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0915 06:34:12.238450       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0915 06:42:39.591684       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0915 06:43:16.110927       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:43:16.110983       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:43:16.152000       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:43:16.152055       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:43:16.156280       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:43:16.156397       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:43:16.190665       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:43:16.190808       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:43:16.317804       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:43:16.318280       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0915 06:43:17.157374       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0915 06:43:17.318895       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0915 06:43:17.333488       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0915 06:43:27.693696       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0915 06:43:28.748518       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0915 06:43:33.290453       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0915 06:43:33.605359       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.175.91"}
	
	
	==> kube-controller-manager [3696b00b2455] <==
	I0915 06:43:22.062016       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="21.981µs"
	I0915 06:43:22.829069       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0915 06:43:22.829112       1 shared_informer.go:320] Caches are synced for resource quota
	I0915 06:43:23.256597       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0915 06:43:23.256643       1 shared_informer.go:320] Caches are synced for garbage collector
	W0915 06:43:26.465209       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:43:26.465259       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:43:27.051654       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:43:27.051696       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:43:27.232416       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:43:27.232461       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0915 06:43:28.750395       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:43:29.597549       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:43:29.597593       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:43:32.056704       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:43:32.056747       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:43:34.510668       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:43:34.510713       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:43:35.045827       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:43:35.045884       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0915 06:43:35.135593       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="9.108µs"
	W0915 06:43:35.780147       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:43:35.780187       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:43:36.256410       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:43:36.256455       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
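A note on the repeating PartialObjectMetadata failures above: this pattern typically means the controller manager's metadata informer (used by the garbage collector and quota controllers) is still tracking an API resource that was removed mid-run, for example a CRD deleted during addon teardown. Asking the server what it currently serves is one way to confirm (a sketch; the context name is taken from this run):

	kubectl --context addons-837740 api-resources --verbs=list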
	
	==> kube-proxy [7e4e2e7c9f9d] <==
	I0915 06:30:54.110522       1 server_linux.go:66] "Using iptables proxy"
	I0915 06:30:54.228484       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0915 06:30:54.228552       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0915 06:30:54.271010       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0915 06:30:54.271085       1 server_linux.go:169] "Using iptables Proxier"
	I0915 06:30:54.273174       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0915 06:30:54.273466       1 server.go:483] "Version info" version="v1.31.1"
	I0915 06:30:54.273478       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 06:30:54.275150       1 config.go:199] "Starting service config controller"
	I0915 06:30:54.275170       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0915 06:30:54.275194       1 config.go:105] "Starting endpoint slice config controller"
	I0915 06:30:54.275198       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0915 06:30:54.275675       1 config.go:328] "Starting node config controller"
	I0915 06:30:54.275682       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0915 06:30:54.375589       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0915 06:30:54.375639       1 shared_informer.go:320] Caches are synced for service config
	I0915 06:30:54.376607       1 shared_informer.go:320] Caches are synced for node config
	
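One note on the kube-proxy startup above: the 06:30:54 warning is informational. With nodePortAddresses unset, NodePort connections are accepted on every local IP, and the log itself suggests the fix. A minimal sketch of the flag form it names (in kubeadm-style clusters this normally lives in the kube-proxy ConfigMap as nodePortAddresses rather than on the command line):

	# flag name taken from the warning in the log; invocation form is a sketch
	kube-proxy --nodeport-addresses primary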
	
	==> kube-scheduler [35aa9c1536cf] <==
	W0915 06:30:45.488004       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0915 06:30:45.488146       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:45.488389       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0915 06:30:45.488476       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 06:30:45.488663       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:45.489141       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0915 06:30:45.489320       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:45.489239       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0915 06:30:45.489477       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0915 06:30:45.488517       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:46.316465       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0915 06:30:46.316512       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:46.382777       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0915 06:30:46.382823       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:46.432959       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0915 06:30:46.433020       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:46.477070       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 06:30:46.477123       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:46.573484       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0915 06:30:46.573702       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:46.595578       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0915 06:30:46.595873       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:46.799372       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0915 06:30:46.799629       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0915 06:30:48.745611       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
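The scheduler's burst of "forbidden" list/watch errors at 06:30:45-46 is the usual startup race: the reflectors begin before RBAC bootstrap completes, and they stop once the extension-apiserver-authentication caches sync at 06:30:48. A quick way to confirm the permissions afterwards is an impersonated access check (a sketch; persistentvolumes is just one of the resources named in the errors above):

	kubectl --context addons-837740 auth can-i list persistentvolumes --as=system:kube-scheduler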
	
	==> kubelet <==
	Sep 15 06:43:33 addons-837740 kubelet[2337]: I0915 06:43:33.542808    2337 memory_manager.go:354] "RemoveStaleState removing state" podUID="18829192-e1c9-489b-adf6-ecbd1ec662c8" containerName="gadget"
	Sep 15 06:43:33 addons-837740 kubelet[2337]: I0915 06:43:33.654134    2337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/42398e49-2009-4739-b884-15187314ed39-gcp-creds\") pod \"nginx\" (UID: \"42398e49-2009-4739-b884-15187314ed39\") " pod="default/nginx"
	Sep 15 06:43:33 addons-837740 kubelet[2337]: I0915 06:43:33.654182    2337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prrlq\" (UniqueName: \"kubernetes.io/projected/42398e49-2009-4739-b884-15187314ed39-kube-api-access-prrlq\") pod \"nginx\" (UID: \"42398e49-2009-4739-b884-15187314ed39\") " pod="default/nginx"
	Sep 15 06:43:33 addons-837740 kubelet[2337]: E0915 06:43:33.920255    2337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="893b7539-d0c9-4122-bcc7-7fcac741c78e"
	Sep 15 06:43:34 addons-837740 kubelet[2337]: I0915 06:43:34.670769    2337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqj9c\" (UniqueName: \"kubernetes.io/projected/d9c2778a-a5ba-42cc-9f8d-38d41f1a3121-kube-api-access-gqj9c\") pod \"d9c2778a-a5ba-42cc-9f8d-38d41f1a3121\" (UID: \"d9c2778a-a5ba-42cc-9f8d-38d41f1a3121\") "
	Sep 15 06:43:34 addons-837740 kubelet[2337]: I0915 06:43:34.670828    2337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/d9c2778a-a5ba-42cc-9f8d-38d41f1a3121-gcp-creds\") pod \"d9c2778a-a5ba-42cc-9f8d-38d41f1a3121\" (UID: \"d9c2778a-a5ba-42cc-9f8d-38d41f1a3121\") "
	Sep 15 06:43:34 addons-837740 kubelet[2337]: I0915 06:43:34.670931    2337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9c2778a-a5ba-42cc-9f8d-38d41f1a3121-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "d9c2778a-a5ba-42cc-9f8d-38d41f1a3121" (UID: "d9c2778a-a5ba-42cc-9f8d-38d41f1a3121"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 15 06:43:34 addons-837740 kubelet[2337]: I0915 06:43:34.672762    2337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9c2778a-a5ba-42cc-9f8d-38d41f1a3121-kube-api-access-gqj9c" (OuterVolumeSpecName: "kube-api-access-gqj9c") pod "d9c2778a-a5ba-42cc-9f8d-38d41f1a3121" (UID: "d9c2778a-a5ba-42cc-9f8d-38d41f1a3121"). InnerVolumeSpecName "kube-api-access-gqj9c". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 15 06:43:34 addons-837740 kubelet[2337]: I0915 06:43:34.772744    2337 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-gqj9c\" (UniqueName: \"kubernetes.io/projected/d9c2778a-a5ba-42cc-9f8d-38d41f1a3121-kube-api-access-gqj9c\") on node \"addons-837740\" DevicePath \"\""
	Sep 15 06:43:34 addons-837740 kubelet[2337]: I0915 06:43:34.772775    2337 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/d9c2778a-a5ba-42cc-9f8d-38d41f1a3121-gcp-creds\") on node \"addons-837740\" DevicePath \"\""
	Sep 15 06:43:35 addons-837740 kubelet[2337]: I0915 06:43:35.884136    2337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvgxx\" (UniqueName: \"kubernetes.io/projected/1a2130f7-6cbe-4a8b-bea3-e3e4436003d2-kube-api-access-wvgxx\") pod \"1a2130f7-6cbe-4a8b-bea3-e3e4436003d2\" (UID: \"1a2130f7-6cbe-4a8b-bea3-e3e4436003d2\") "
	Sep 15 06:43:35 addons-837740 kubelet[2337]: I0915 06:43:35.890011    2337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a2130f7-6cbe-4a8b-bea3-e3e4436003d2-kube-api-access-wvgxx" (OuterVolumeSpecName: "kube-api-access-wvgxx") pod "1a2130f7-6cbe-4a8b-bea3-e3e4436003d2" (UID: "1a2130f7-6cbe-4a8b-bea3-e3e4436003d2"). InnerVolumeSpecName "kube-api-access-wvgxx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 15 06:43:35 addons-837740 kubelet[2337]: I0915 06:43:35.918341    2337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9c2778a-a5ba-42cc-9f8d-38d41f1a3121" path="/var/lib/kubelet/pods/d9c2778a-a5ba-42cc-9f8d-38d41f1a3121/volumes"
	Sep 15 06:43:35 addons-837740 kubelet[2337]: I0915 06:43:35.984783    2337 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-wvgxx\" (UniqueName: \"kubernetes.io/projected/1a2130f7-6cbe-4a8b-bea3-e3e4436003d2-kube-api-access-wvgxx\") on node \"addons-837740\" DevicePath \"\""
	Sep 15 06:43:36 addons-837740 kubelet[2337]: I0915 06:43:36.086022    2337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdl67\" (UniqueName: \"kubernetes.io/projected/53474271-c9f2-4050-bf68-df5e1935aa85-kube-api-access-kdl67\") pod \"53474271-c9f2-4050-bf68-df5e1935aa85\" (UID: \"53474271-c9f2-4050-bf68-df5e1935aa85\") "
	Sep 15 06:43:36 addons-837740 kubelet[2337]: I0915 06:43:36.088791    2337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53474271-c9f2-4050-bf68-df5e1935aa85-kube-api-access-kdl67" (OuterVolumeSpecName: "kube-api-access-kdl67") pod "53474271-c9f2-4050-bf68-df5e1935aa85" (UID: "53474271-c9f2-4050-bf68-df5e1935aa85"). InnerVolumeSpecName "kube-api-access-kdl67". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 15 06:43:36 addons-837740 kubelet[2337]: I0915 06:43:36.186662    2337 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-kdl67\" (UniqueName: \"kubernetes.io/projected/53474271-c9f2-4050-bf68-df5e1935aa85-kube-api-access-kdl67\") on node \"addons-837740\" DevicePath \"\""
	Sep 15 06:43:36 addons-837740 kubelet[2337]: I0915 06:43:36.269567    2337 scope.go:117] "RemoveContainer" containerID="3b04ae659313ca8907015471a4a2a448ca2db0bbd685d3d461b94db188a79b3c"
	Sep 15 06:43:36 addons-837740 kubelet[2337]: I0915 06:43:36.322097    2337 scope.go:117] "RemoveContainer" containerID="3b04ae659313ca8907015471a4a2a448ca2db0bbd685d3d461b94db188a79b3c"
	Sep 15 06:43:36 addons-837740 kubelet[2337]: E0915 06:43:36.323290    2337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 3b04ae659313ca8907015471a4a2a448ca2db0bbd685d3d461b94db188a79b3c" containerID="3b04ae659313ca8907015471a4a2a448ca2db0bbd685d3d461b94db188a79b3c"
	Sep 15 06:43:36 addons-837740 kubelet[2337]: I0915 06:43:36.323335    2337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"3b04ae659313ca8907015471a4a2a448ca2db0bbd685d3d461b94db188a79b3c"} err="failed to get container status \"3b04ae659313ca8907015471a4a2a448ca2db0bbd685d3d461b94db188a79b3c\": rpc error: code = Unknown desc = Error response from daemon: No such container: 3b04ae659313ca8907015471a4a2a448ca2db0bbd685d3d461b94db188a79b3c"
	Sep 15 06:43:36 addons-837740 kubelet[2337]: I0915 06:43:36.323362    2337 scope.go:117] "RemoveContainer" containerID="8bd916583c0a3ff5ec6e87cbe9622deabdbc17a9a77d70ee134c30580e20fb78"
	Sep 15 06:43:36 addons-837740 kubelet[2337]: I0915 06:43:36.346041    2337 scope.go:117] "RemoveContainer" containerID="8bd916583c0a3ff5ec6e87cbe9622deabdbc17a9a77d70ee134c30580e20fb78"
	Sep 15 06:43:36 addons-837740 kubelet[2337]: E0915 06:43:36.347177    2337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 8bd916583c0a3ff5ec6e87cbe9622deabdbc17a9a77d70ee134c30580e20fb78" containerID="8bd916583c0a3ff5ec6e87cbe9622deabdbc17a9a77d70ee134c30580e20fb78"
	Sep 15 06:43:36 addons-837740 kubelet[2337]: I0915 06:43:36.347224    2337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"8bd916583c0a3ff5ec6e87cbe9622deabdbc17a9a77d70ee134c30580e20fb78"} err="failed to get container status \"8bd916583c0a3ff5ec6e87cbe9622deabdbc17a9a77d70ee134c30580e20fb78\": rpc error: code = Unknown desc = Error response from daemon: No such container: 8bd916583c0a3ff5ec6e87cbe9622deabdbc17a9a77d70ee134c30580e20fb78"
	
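The kubelet's "No such container" errors at 06:43:36 look like a benign teardown race: the docker runtime removes the container between the RemoveContainer call and the follow-up status query, so the status lookup finds nothing. Checking the runtime directly shows the same (a sketch, run on the minikube node; the container ID is the one from the log above):

	docker ps -a --filter id=3b04ae659313ca8907015471a4a2a448ca2db0bbd685d3d461b94db188a79b3c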
	
	==> storage-provisioner [a2553db37f09] <==
	I0915 06:31:00.001266       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0915 06:31:00.020062       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0915 06:31:00.020131       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0915 06:31:00.031775       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0915 06:31:00.032198       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ab47c2fe-346c-4436-a442-df209c167d0c", APIVersion:"v1", ResourceVersion:"551", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-837740_6e965af9-fe12-4e11-afb0-95e4c4520e62 became leader
	I0915 06:31:00.032234       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-837740_6e965af9-fe12-4e11-afb0-95e4c4520e62!
	I0915 06:31:00.133248       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-837740_6e965af9-fe12-4e11-afb0-95e4c4520e62!
	
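storage-provisioner acquired its leader lease in well under a second. The lock here is the endpoints-based flavor, recorded on the kube-system/k8s.io-minikube-hostpath Endpoints object referenced in the event above; inspecting that object shows the current holder (a sketch; the annotation key below is the conventional client-go leader-election key and an assumption here):

	kubectl --context addons-837740 -n kube-system get endpoints k8s.io-minikube-hostpath \
	  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'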

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-837740 -n addons-837740
helpers_test.go:261: (dbg) Run:  kubectl --context addons-837740 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-7wph7 ingress-nginx-admission-patch-tm9xd
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-837740 describe pod busybox ingress-nginx-admission-create-7wph7 ingress-nginx-admission-patch-tm9xd
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-837740 describe pod busybox ingress-nginx-admission-create-7wph7 ingress-nginx-admission-patch-tm9xd: exit status 1 (104.928313ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-837740/192.168.49.2
	Start Time:       Sun, 15 Sep 2024 06:34:20 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l6wcd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-l6wcd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m18s                   default-scheduler  Successfully assigned default/busybox to addons-837740
	  Normal   Pulling    7m54s (x4 over 9m18s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m53s (x4 over 9m18s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m53s (x4 over 9m18s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m42s (x6 over 9m17s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m14s (x21 over 9m17s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-7wph7" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-tm9xd" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-837740 describe pod busybox ingress-nginx-admission-create-7wph7 ingress-nginx-admission-patch-tm9xd: exit status 1
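The describe output above pins the failure: every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc dies with "unauthorized: authentication failed" against gcr.io, so the pod never leaves ImagePullBackOff. One way to check whether the node itself can pull the image, independent of kubelet back-off, is to run the pull over minikube's ssh passthrough (a sketch using this run's profile):

	out/minikube-linux-arm64 -p addons-837740 ssh -- docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc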
--- FAIL: TestAddons/parallel/Registry (74.99s)


Test pass (318/343)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 13.97
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 5.02
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.2
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.59
22 TestOffline 57.96
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 224.06
29 TestAddons/serial/Volcano 40.12
31 TestAddons/serial/GCPAuth/Namespaces 0.19
34 TestAddons/parallel/Ingress 20.54
35 TestAddons/parallel/InspektorGadget 10.74
36 TestAddons/parallel/MetricsServer 5.72
39 TestAddons/parallel/CSI 53.44
40 TestAddons/parallel/Headlamp 15.58
41 TestAddons/parallel/CloudSpanner 5.53
42 TestAddons/parallel/LocalPath 53.43
43 TestAddons/parallel/NvidiaDevicePlugin 5.47
44 TestAddons/parallel/Yakd 11.71
45 TestAddons/StoppedEnableDisable 11.2
46 TestCertOptions 35.52
47 TestCertExpiration 252.15
48 TestDockerFlags 39.61
49 TestForceSystemdFlag 54.02
50 TestForceSystemdEnv 41.04
56 TestErrorSpam/setup 32.8
57 TestErrorSpam/start 0.71
58 TestErrorSpam/status 1.03
59 TestErrorSpam/pause 1.44
60 TestErrorSpam/unpause 1.54
61 TestErrorSpam/stop 11.03
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 45.27
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 34.06
68 TestFunctional/serial/KubeContext 0.1
69 TestFunctional/serial/KubectlGetPods 0.11
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.27
73 TestFunctional/serial/CacheCmd/cache/add_local 0.91
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.69
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.61
78 TestFunctional/serial/CacheCmd/cache/delete 0.11
79 TestFunctional/serial/MinikubeKubectlCmd 0.14
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
81 TestFunctional/serial/ExtraConfig 40.99
82 TestFunctional/serial/ComponentHealth 0.09
83 TestFunctional/serial/LogsCmd 1.19
84 TestFunctional/serial/LogsFileCmd 1.49
85 TestFunctional/serial/InvalidService 4.49
87 TestFunctional/parallel/ConfigCmd 0.45
88 TestFunctional/parallel/DashboardCmd 11.05
89 TestFunctional/parallel/DryRun 0.57
90 TestFunctional/parallel/InternationalLanguage 0.23
91 TestFunctional/parallel/StatusCmd 1.19
95 TestFunctional/parallel/ServiceCmdConnect 11.73
96 TestFunctional/parallel/AddonsCmd 0.19
97 TestFunctional/parallel/PersistentVolumeClaim 28.22
99 TestFunctional/parallel/SSHCmd 0.73
100 TestFunctional/parallel/CpCmd 2.47
102 TestFunctional/parallel/FileSync 0.37
103 TestFunctional/parallel/CertSync 2.06
107 TestFunctional/parallel/NodeLabels 0.09
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.29
111 TestFunctional/parallel/License 0.24
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.66
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.46
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 6.24
124 TestFunctional/parallel/ServiceCmd/List 0.52
125 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.63
127 TestFunctional/parallel/ServiceCmd/HTTPS 0.57
128 TestFunctional/parallel/ServiceCmd/Format 0.53
129 TestFunctional/parallel/ProfileCmd/profile_list 0.56
130 TestFunctional/parallel/ServiceCmd/URL 0.49
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.54
132 TestFunctional/parallel/MountCmd/any-port 8.66
133 TestFunctional/parallel/MountCmd/specific-port 1.38
134 TestFunctional/parallel/MountCmd/VerifyCleanup 1.92
135 TestFunctional/parallel/Version/short 0.08
136 TestFunctional/parallel/Version/components 1.14
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.34
141 TestFunctional/parallel/ImageCommands/ImageBuild 3.1
142 TestFunctional/parallel/ImageCommands/Setup 0.69
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.26
146 TestFunctional/parallel/DockerEnv/bash 1.29
147 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.24
148 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.93
149 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.21
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.44
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.73
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.52
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 123.47
161 TestMultiControlPlane/serial/DeployApp 10.98
162 TestMultiControlPlane/serial/PingHostFromPods 1.67
163 TestMultiControlPlane/serial/AddWorkerNode 23.83
164 TestMultiControlPlane/serial/NodeLabels 0.12
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.8
166 TestMultiControlPlane/serial/CopyFile 19.95
167 TestMultiControlPlane/serial/StopSecondaryNode 12.27
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.63
169 TestMultiControlPlane/serial/RestartSecondaryNode 64.2
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.79
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 256.37
172 TestMultiControlPlane/serial/DeleteSecondaryNode 11.54
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.55
174 TestMultiControlPlane/serial/StopCluster 32.78
175 TestMultiControlPlane/serial/RestartCluster 96.37
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.68
177 TestMultiControlPlane/serial/AddSecondaryNode 47.59
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.79
181 TestImageBuild/serial/Setup 31.92
182 TestImageBuild/serial/NormalBuild 1.78
183 TestImageBuild/serial/BuildWithBuildArg 0.98
184 TestImageBuild/serial/BuildWithDockerIgnore 0.85
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.81
189 TestJSONOutput/start/Command 74.9
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.59
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.52
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 5.85
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.22
214 TestKicCustomNetwork/create_custom_network 33.46
215 TestKicCustomNetwork/use_default_bridge_network 32.36
216 TestKicExistingNetwork 31.51
217 TestKicCustomSubnet 33.46
218 TestKicStaticIP 32.79
219 TestMainNoArgs 0.05
220 TestMinikubeProfile 71.51
223 TestMountStart/serial/StartWithMountFirst 7.93
224 TestMountStart/serial/VerifyMountFirst 0.27
225 TestMountStart/serial/StartWithMountSecond 8.32
226 TestMountStart/serial/VerifyMountSecond 0.26
227 TestMountStart/serial/DeleteFirst 1.46
228 TestMountStart/serial/VerifyMountPostDelete 0.26
229 TestMountStart/serial/Stop 1.22
230 TestMountStart/serial/RestartStopped 8.21
231 TestMountStart/serial/VerifyMountPostStop 0.26
234 TestMultiNode/serial/FreshStart2Nodes 73.42
235 TestMultiNode/serial/DeployApp2Nodes 43.32
236 TestMultiNode/serial/PingHostFrom2Pods 1.02
237 TestMultiNode/serial/AddNode 17.69
238 TestMultiNode/serial/MultiNodeLabels 0.09
239 TestMultiNode/serial/ProfileList 0.35
240 TestMultiNode/serial/CopyFile 10.18
241 TestMultiNode/serial/StopNode 2.26
242 TestMultiNode/serial/StartAfterStop 11.49
243 TestMultiNode/serial/RestartKeepsNodes 98.97
244 TestMultiNode/serial/DeleteNode 5.72
245 TestMultiNode/serial/StopMultiNode 21.54
246 TestMultiNode/serial/RestartMultiNode 51.04
247 TestMultiNode/serial/ValidateNameConflict 34.77
252 TestPreload 114.49
254 TestScheduledStopUnix 104.24
255 TestSkaffold 121.91
257 TestInsufficientStorage 11.1
258 TestRunningBinaryUpgrade 87.87
260 TestKubernetesUpgrade 125.23
261 TestMissingContainerUpgrade 170.35
263 TestPause/serial/Start 82.62
264 TestPause/serial/SecondStartNoReconfiguration 35.99
265 TestPause/serial/Pause 0.78
266 TestPause/serial/VerifyStatus 0.38
267 TestPause/serial/Unpause 0.69
268 TestPause/serial/PauseAgain 0.87
269 TestPause/serial/DeletePaused 2.31
270 TestPause/serial/VerifyDeletedResources 0.48
271 TestStoppedBinaryUpgrade/Setup 0.59
272 TestStoppedBinaryUpgrade/Upgrade 111.98
280 TestStoppedBinaryUpgrade/MinikubeLogs 2.57
282 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
283 TestNoKubernetes/serial/StartWithK8s 37.8
284 TestNoKubernetes/serial/StartWithStopK8s 20.7
296 TestNoKubernetes/serial/Start 8.53
297 TestNoKubernetes/serial/VerifyK8sNotRunning 0.36
298 TestNoKubernetes/serial/ProfileList 1.16
299 TestNoKubernetes/serial/Stop 1.23
300 TestNoKubernetes/serial/StartNoArgs 8.78
301 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.42
303 TestStartStop/group/old-k8s-version/serial/FirstStart 134.73
304 TestStartStop/group/old-k8s-version/serial/DeployApp 9.55
305 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.19
306 TestStartStop/group/old-k8s-version/serial/Stop 11.7
307 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.49
308 TestStartStop/group/old-k8s-version/serial/SecondStart 145.31
310 TestStartStop/group/no-preload/serial/FirstStart 62.03
311 TestStartStop/group/no-preload/serial/DeployApp 9.4
312 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.04
313 TestStartStop/group/no-preload/serial/Stop 11.42
314 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
315 TestStartStop/group/no-preload/serial/SecondStart 267.21
316 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
317 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
318 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
319 TestStartStop/group/old-k8s-version/serial/Pause 2.73
321 TestStartStop/group/embed-certs/serial/FirstStart 76.7
322 TestStartStop/group/embed-certs/serial/DeployApp 10.35
323 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.16
324 TestStartStop/group/embed-certs/serial/Stop 10.97
325 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
326 TestStartStop/group/embed-certs/serial/SecondStart 289.15
327 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
328 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.11
329 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
330 TestStartStop/group/no-preload/serial/Pause 2.88
332 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 68.75
333 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.4
334 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.12
335 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.8
336 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
337 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 267.42
338 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
339 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
340 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
341 TestStartStop/group/embed-certs/serial/Pause 2.86
343 TestStartStop/group/newest-cni/serial/FirstStart 40.2
344 TestStartStop/group/newest-cni/serial/DeployApp 0
345 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.36
346 TestStartStop/group/newest-cni/serial/Stop 10.14
347 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
348 TestStartStop/group/newest-cni/serial/SecondStart 18.46
349 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
350 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.36
352 TestStartStop/group/newest-cni/serial/Pause 2.89
353 TestNetworkPlugins/group/auto/Start 72.59
354 TestNetworkPlugins/group/auto/KubeletFlags 0.3
355 TestNetworkPlugins/group/auto/NetCatPod 9.34
356 TestNetworkPlugins/group/auto/DNS 0.19
357 TestNetworkPlugins/group/auto/Localhost 0.17
358 TestNetworkPlugins/group/auto/HairPin 0.17
359 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
360 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.17
361 TestNetworkPlugins/group/kindnet/Start 79.29
362 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
363 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.78
364 TestNetworkPlugins/group/calico/Start 80.92
365 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
366 TestNetworkPlugins/group/kindnet/KubeletFlags 0.32
367 TestNetworkPlugins/group/kindnet/NetCatPod 10.31
368 TestNetworkPlugins/group/calico/ControllerPod 6.01
369 TestNetworkPlugins/group/calico/KubeletFlags 0.27
370 TestNetworkPlugins/group/calico/NetCatPod 10.32
371 TestNetworkPlugins/group/kindnet/DNS 0.26
372 TestNetworkPlugins/group/kindnet/Localhost 0.21
373 TestNetworkPlugins/group/kindnet/HairPin 0.23
374 TestNetworkPlugins/group/calico/DNS 0.35
375 TestNetworkPlugins/group/calico/Localhost 0.24
376 TestNetworkPlugins/group/calico/HairPin 0.25
377 TestNetworkPlugins/group/custom-flannel/Start 60.11
378 TestNetworkPlugins/group/false/Start 56.93
379 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.36
380 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.42
381 TestNetworkPlugins/group/false/KubeletFlags 0.32
382 TestNetworkPlugins/group/false/NetCatPod 10.28
383 TestNetworkPlugins/group/custom-flannel/DNS 0.3
384 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
385 TestNetworkPlugins/group/custom-flannel/HairPin 0.25
386 TestNetworkPlugins/group/false/DNS 0.29
387 TestNetworkPlugins/group/false/Localhost 0.25
388 TestNetworkPlugins/group/false/HairPin 0.28
389 TestNetworkPlugins/group/enable-default-cni/Start 59.32
390 TestNetworkPlugins/group/flannel/Start 61.62
391 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.43
392 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.4
393 TestNetworkPlugins/group/flannel/ControllerPod 6.01
394 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
395 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
396 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
397 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
398 TestNetworkPlugins/group/flannel/NetCatPod 10.28
399 TestNetworkPlugins/group/flannel/DNS 0.5
400 TestNetworkPlugins/group/flannel/Localhost 0.26
401 TestNetworkPlugins/group/flannel/HairPin 0.37
402 TestNetworkPlugins/group/bridge/Start 82.37
403 TestNetworkPlugins/group/kubenet/Start 51.34
404 TestNetworkPlugins/group/kubenet/KubeletFlags 0.31
405 TestNetworkPlugins/group/kubenet/NetCatPod 10.29
406 TestNetworkPlugins/group/kubenet/DNS 0.19
407 TestNetworkPlugins/group/kubenet/Localhost 0.22
408 TestNetworkPlugins/group/bridge/KubeletFlags 0.42
409 TestNetworkPlugins/group/kubenet/HairPin 0.23
410 TestNetworkPlugins/group/bridge/NetCatPod 9.46
411 TestNetworkPlugins/group/bridge/DNS 0.24
412 TestNetworkPlugins/group/bridge/Localhost 0.23
413 TestNetworkPlugins/group/bridge/HairPin 0.3
TestDownloadOnly/v1.20.0/json-events (13.97s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-221568 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-221568 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (13.973664437s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (13.97s)
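Each line that start -o=json emits is a CloudEvents-style JSON object, which is what the json-events test consumes. A sketch of filtering the step events by hand (the event type string and the .data.name field are assumptions about minikube's event schema, the profile name is arbitrary, and jq is not part of the test harness):

	out/minikube-linux-arm64 start -o=json --download-only -p download-only-demo \
	  --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.name'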

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-221568
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-221568: exit status 85 (66.933946ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-221568 | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC |          |
	|         | -p download-only-221568        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 06:29:34
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 06:29:34.235559    7673 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:29:34.236018    7673 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:29:34.236033    7673 out.go:358] Setting ErrFile to fd 2...
	I0915 06:29:34.236039    7673 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:29:34.236293    7673 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-2359/.minikube/bin
	W0915 06:29:34.236429    7673 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19644-2359/.minikube/config/config.json: open /home/jenkins/minikube-integration/19644-2359/.minikube/config/config.json: no such file or directory
	I0915 06:29:34.236837    7673 out.go:352] Setting JSON to true
	I0915 06:29:34.237674    7673 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":726,"bootTime":1726381048,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0915 06:29:34.237750    7673 start.go:139] virtualization:  
	I0915 06:29:34.241461    7673 out.go:97] [download-only-221568] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0915 06:29:34.241635    7673 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19644-2359/.minikube/cache/preloaded-tarball: no such file or directory
	I0915 06:29:34.241738    7673 notify.go:220] Checking for updates...
	I0915 06:29:34.244608    7673 out.go:169] MINIKUBE_LOCATION=19644
	I0915 06:29:34.247031    7673 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 06:29:34.249308    7673 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19644-2359/kubeconfig
	I0915 06:29:34.251550    7673 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-2359/.minikube
	I0915 06:29:34.253819    7673 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0915 06:29:34.257870    7673 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0915 06:29:34.258289    7673 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:29:34.292305    7673 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0915 06:29:34.292403    7673 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:29:34.610328    7673 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-15 06:29:34.600767963 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 06:29:34.610434    7673 docker.go:318] overlay module found
	I0915 06:29:34.612707    7673 out.go:97] Using the docker driver based on user configuration
	I0915 06:29:34.612731    7673 start.go:297] selected driver: docker
	I0915 06:29:34.612737    7673 start.go:901] validating driver "docker" against <nil>
	I0915 06:29:34.612835    7673 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:29:34.666064    7673 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-15 06:29:34.657061094 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 06:29:34.666274    7673 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 06:29:34.666568    7673 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0915 06:29:34.666729    7673 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0915 06:29:34.669232    7673 out.go:169] Using Docker driver with root privileges
	I0915 06:29:34.671444    7673 cni.go:84] Creating CNI manager for ""
	I0915 06:29:34.671510    7673 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0915 06:29:34.671590    7673 start.go:340] cluster config:
	{Name:download-only-221568 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-221568 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:29:34.674118    7673 out.go:97] Starting "download-only-221568" primary control-plane node in "download-only-221568" cluster
	I0915 06:29:34.674137    7673 cache.go:121] Beginning downloading kic base image for docker with docker
	I0915 06:29:34.676187    7673 out.go:97] Pulling base image v0.0.45-1726358845-19644 ...
	I0915 06:29:34.676208    7673 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0915 06:29:34.676305    7673 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0915 06:29:34.691654    7673 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0915 06:29:34.691833    7673 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0915 06:29:34.691939    7673 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0915 06:29:34.734741    7673 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0915 06:29:34.734766    7673 cache.go:56] Caching tarball of preloaded images
	I0915 06:29:34.734906    7673 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0915 06:29:34.737430    7673 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0915 06:29:34.737457    7673 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0915 06:29:34.822520    7673 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /home/jenkins/minikube-integration/19644-2359/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0915 06:29:39.059596    7673 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0915 06:29:39.059870    7673 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19644-2359/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0915 06:29:40.094580    7673 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0915 06:29:40.095011    7673 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/download-only-221568/config.json ...
	I0915 06:29:40.095049    7673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/download-only-221568/config.json: {Name:mk9160cd9fe9c414bfe431f25bdcec645b97f867 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:29:40.095242    7673 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0915 06:29:40.095430    7673 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19644-2359/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-221568 host does not exist
	  To start a cluster, run: "minikube start -p download-only-221568"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
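
Exit status 85 is the expected outcome here: a download-only profile never starts a host, so "minikube logs" has nothing to report. For anyone reproducing the download-only flow outside CI, a minimal sketch follows; the profile name download-demo is illustrative, and the cache lands under MINIKUBE_HOME (~/.minikube by default):

  # Fetch the kic base image, preload tarball, and kubectl without creating a cluster
  minikube start -o=json --download-only -p download-demo --force \
    --kubernetes-version=v1.20.0 --driver=docker --container-runtime=docker
  # The artifacts downloaded in the log above land in the local cache
  ls ~/.minikube/cache/preloaded-tarball/
  ls ~/.minikube/cache/linux/arm64/v1.20.0/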

TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-221568
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.1/json-events (5.02s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-157916 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-157916 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (5.020534764s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (5.02s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-157916
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-157916: exit status 85 (78.989356ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-221568 | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC |                     |
	|         | -p download-only-221568        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:29 UTC |
	| delete  | -p download-only-221568        | download-only-221568 | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:29 UTC |
	| start   | -o=json --download-only        | download-only-157916 | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC |                     |
	|         | -p download-only-157916        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 06:29:48
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 06:29:48.609307    7872 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:29:48.609480    7872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:29:48.609491    7872 out.go:358] Setting ErrFile to fd 2...
	I0915 06:29:48.609495    7872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:29:48.609733    7872 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-2359/.minikube/bin
	I0915 06:29:48.610171    7872 out.go:352] Setting JSON to true
	I0915 06:29:48.610917    7872 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":741,"bootTime":1726381048,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0915 06:29:48.610999    7872 start.go:139] virtualization:  
	I0915 06:29:48.613586    7872 out.go:97] [download-only-157916] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0915 06:29:48.613751    7872 notify.go:220] Checking for updates...
	I0915 06:29:48.615636    7872 out.go:169] MINIKUBE_LOCATION=19644
	I0915 06:29:48.618094    7872 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 06:29:48.620284    7872 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19644-2359/kubeconfig
	I0915 06:29:48.622375    7872 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-2359/.minikube
	I0915 06:29:48.624577    7872 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0915 06:29:48.628526    7872 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0915 06:29:48.628810    7872 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:29:48.656611    7872 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0915 06:29:48.656715    7872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:29:48.720052    7872 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-15 06:29:48.710781744 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 06:29:48.720160    7872 docker.go:318] overlay module found
	I0915 06:29:48.722271    7872 out.go:97] Using the docker driver based on user configuration
	I0915 06:29:48.722309    7872 start.go:297] selected driver: docker
	I0915 06:29:48.722315    7872 start.go:901] validating driver "docker" against <nil>
	I0915 06:29:48.722422    7872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:29:48.785400    7872 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-15 06:29:48.77392198 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 06:29:48.785648    7872 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 06:29:48.786026    7872 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0915 06:29:48.786253    7872 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0915 06:29:48.788808    7872 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-157916 host does not exist
	  To start a cluster, run: "minikube start -p download-only-157916"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

TestDownloadOnly/v1.31.1/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.20s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-157916
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.59s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-730073 --alsologtostderr --binary-mirror http://127.0.0.1:35331 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-730073" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-730073
--- PASS: TestBinaryMirror (0.59s)
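
TestBinaryMirror verifies that Kubernetes binary downloads (kubectl, kubelet, kubeadm) can be redirected away from dl.k8s.io; the 127.0.0.1:35331 endpoint above is presumably a short-lived HTTP server hosted by the test itself. A sketch under the assumption that you serve the release binaries at your own MIRROR_URL:

  # Redirect Kubernetes binary downloads to an alternative mirror
  minikube start --download-only -p binary-mirror-demo \
    --binary-mirror "$MIRROR_URL" --driver=docker --container-runtime=docker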

TestOffline (57.96s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-686378 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-686378 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (55.815917003s)
helpers_test.go:175: Cleaning up "offline-docker-686378" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-686378
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-686378: (2.1394573s)
--- PASS: TestOffline (57.96s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-837740
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-837740: exit status 85 (64.714955ms)

-- stdout --
	* Profile "addons-837740" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-837740"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-837740
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-837740: exit status 85 (61.160571ms)

-- stdout --
	* Profile "addons-837740" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-837740"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (224.06s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-837740 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-837740 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (3m44.058611391s)
--- PASS: TestAddons/Setup (224.06s)
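
This single start wires up the thirteen addons the parallel subtests below exercise. A trimmed sketch of the same invocation, assuming a profile named addons-demo and keeping only a few of the --addons flags (any subset works the same way):

  minikube start -p addons-demo --wait=true --memory=4000 \
    --driver=docker --container-runtime=docker \
    --addons=registry --addons=metrics-server --addons=ingress --addons=ingress-dns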

TestAddons/serial/Volcano (40.12s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 80.672326ms
addons_test.go:897: volcano-scheduler stabilized in 80.844224ms
addons_test.go:905: volcano-admission stabilized in 81.595404ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-hmcld" [dfe0cea6-4327-4faa-9858-6f03bf00163e] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.00589312s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-rfgkb" [33f302a5-6547-4ef0-a15f-830d30f1990d] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003879636s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-jmcgm" [67a48bf2-59cc-4cdd-b0a2-13b044c85c57] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.006146023s
addons_test.go:932: (dbg) Run:  kubectl --context addons-837740 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-837740 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-837740 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [7bb5d418-c4c4-4837-b701-1cbc6c3435f0] Pending
helpers_test.go:344: "test-job-nginx-0" [7bb5d418-c4c4-4837-b701-1cbc6c3435f0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [7bb5d418-c4c4-4837-b701-1cbc6c3435f0] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.004389829s
addons_test.go:968: (dbg) Run:  out/minikube-linux-arm64 -p addons-837740 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-arm64 -p addons-837740 addons disable volcano --alsologtostderr -v=1: (10.422662586s)
--- PASS: TestAddons/serial/Volcano (40.12s)

TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-837740 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-837740 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/parallel/Ingress (20.54s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-837740 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-837740 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-837740 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [42398e49-2009-4739-b884-15187314ed39] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [42398e49-2009-4739-b884-15187314ed39] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003837915s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-837740 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-837740 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-837740 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-837740 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-837740 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-837740 addons disable ingress --alsologtostderr -v=1: (7.847417489s)
--- PASS: TestAddons/parallel/Ingress (20.54s)
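
The test reaches the app through the ingress controller by Host header, then through ingress-dns by name. The same two probes by hand, assuming profile addons-demo with an Ingress for nginx.example.com already applied:

  # Hit the controller on the node with the test Host header
  minikube -p addons-demo ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
  # Resolve an ingress-dns name against the node IP (192.168.49.2 in this run)
  nslookup hello-john.test "$(minikube -p addons-demo ip)"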

TestAddons/parallel/InspektorGadget (10.74s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-twbl8" [18829192-e1c9-489b-adf6-ecbd1ec662c8] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00454947s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-837740
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-837740: (5.737705094s)
--- PASS: TestAddons/parallel/InspektorGadget (10.74s)

TestAddons/parallel/MetricsServer (5.72s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.685935ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-bgbxc" [f10bfbc8-7858-4a49-9947-c358eaefb7b2] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004488958s
addons_test.go:417: (dbg) Run:  kubectl --context addons-837740 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-837740 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.72s)

TestAddons/parallel/CSI (53.44s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 9.755971ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-837740 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-837740 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-837740 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-837740 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-837740 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-837740 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-837740 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-837740 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-837740 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-837740 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-837740 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [37cbb459-f6e4-417f-849f-78d71a062c6e] Pending
helpers_test.go:344: "task-pv-pod" [37cbb459-f6e4-417f-849f-78d71a062c6e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [37cbb459-f6e4-417f-849f-78d71a062c6e] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.004097371s
addons_test.go:590: (dbg) Run:  kubectl --context addons-837740 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-837740 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-837740 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-837740 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-837740 delete pod task-pv-pod: (1.386625243s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-837740 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-837740 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-837740 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-837740 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-837740 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-837740 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-837740 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-837740 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-837740 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-837740 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-837740 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-837740 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-837740 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-837740 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-837740 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-837740 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-837740 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-837740 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-837740 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-837740 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-837740 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-837740 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [006de8fd-7632-43e0-b3bc-d107c5c082bb] Pending
helpers_test.go:344: "task-pv-pod-restore" [006de8fd-7632-43e0-b3bc-d107c5c082bb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [006de8fd-7632-43e0-b3bc-d107c5c082bb] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003738277s
addons_test.go:632: (dbg) Run:  kubectl --context addons-837740 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-837740 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-837740 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-837740 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-837740 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.654285115s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-837740 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (53.44s)
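
The blocks of repeated "get pvc" calls above are the helper polling the claim's phase until it reports Bound. An equivalent shell loop, assuming context addons-demo and claim hpvc:

  # Poll until the CSI hostpath claim leaves Pending
  until [ "$(kubectl --context addons-demo get pvc hpvc -o 'jsonpath={.status.phase}')" = "Bound" ]; do
    sleep 2
  done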

TestAddons/parallel/Headlamp (15.58s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-837740 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-l96sc" [6cc023bf-b0b8-4daf-9fd7-3248e53ebbcb] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-l96sc" [6cc023bf-b0b8-4daf-9fd7-3248e53ebbcb] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-l96sc" [6cc023bf-b0b8-4daf-9fd7-3248e53ebbcb] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.005240969s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-837740 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-837740 addons disable headlamp --alsologtostderr -v=1: (5.669665896s)
--- PASS: TestAddons/parallel/Headlamp (15.58s)

TestAddons/parallel/CloudSpanner (5.53s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-f96b4" [7ceb4035-b36a-4b1f-b9c0-01076e7ccfec] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004358075s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-837740
--- PASS: TestAddons/parallel/CloudSpanner (5.53s)

TestAddons/parallel/LocalPath (53.43s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-837740 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-837740 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-837740 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-837740 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-837740 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-837740 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-837740 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-837740 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [96ad8d18-5caa-40a1-9567-eca4d03c176a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [96ad8d18-5caa-40a1-9567-eca4d03c176a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [96ad8d18-5caa-40a1-9567-eca4d03c176a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004687647s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-837740 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-837740 ssh "cat /opt/local-path-provisioner/pvc-bdf2da06-3f51-484b-b532-487a0825837e_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-837740 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-837740 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-837740 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-arm64 -p addons-837740 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.344817193s)
--- PASS: TestAddons/parallel/LocalPath (53.43s)
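
The final read-back above goes straight to the provisioner's directory on the node. By hand, assuming profile addons-demo (the pvc-... directory name embeds the claim's UID, so the exact path differs per run):

  minikube -p addons-demo ssh "cat /opt/local-path-provisioner/<pvc-uid>_default_test-pvc/file1"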

TestAddons/parallel/NvidiaDevicePlugin (5.47s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-tt4ct" [201ece5f-7d16-40c8-b54a-2afc0f9b1595] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004149245s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-837740
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.47s)

TestAddons/parallel/Yakd (11.71s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-txt6k" [f33aea63-5625-4605-a182-eb06b23ad7cd] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003212063s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-837740 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-837740 addons disable yakd --alsologtostderr -v=1: (5.702285634s)
--- PASS: TestAddons/parallel/Yakd (11.71s)

TestAddons/StoppedEnableDisable (11.2s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-837740
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-837740: (10.950964398s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-837740
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-837740
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-837740
--- PASS: TestAddons/StoppedEnableDisable (11.20s)

TestCertOptions (35.52s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-355315 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-355315 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (32.717202899s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-355315 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-355315 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-355315 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-355315" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-355315
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-355315: (2.122508884s)
--- PASS: TestCertOptions (35.52s)
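
The check at cert_options_test.go:60 asserts that every --apiserver-ips and --apiserver-names value appears as a subject alternative name in the generated apiserver certificate. A sketch of the same verification, assuming profile cert-demo:

  minikube start -p cert-demo --memory=2048 --driver=docker --container-runtime=docker \
    --apiserver-ips=192.168.15.15 --apiserver-names=www.google.com --apiserver-port=8555
  # The custom IP and name should be listed under Subject Alternative Name
  minikube -p cert-demo ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
    | grep -A1 'Subject Alternative Name'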

TestCertExpiration (252.15s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-520157 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-520157 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (43.276715411s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-520157 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-520157 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (26.533451006s)
helpers_test.go:175: Cleaning up "cert-expiration-520157" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-520157
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-520157: (2.34114329s)
--- PASS: TestCertExpiration (252.15s)
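
The two starts above sit roughly 180 seconds apart, matching the 3-minute validity of the first run's certificates: the test lets them lapse, then confirms that a restart with a longer --cert-expiration re-issues them cleanly. Sketch, assuming profile cert-exp-demo:

  minikube start -p cert-exp-demo --memory=2048 --cert-expiration=3m \
    --driver=docker --container-runtime=docker
  # after the 3m certs lapse, a second start with a longer validity must recover the cluster
  minikube start -p cert-exp-demo --memory=2048 --cert-expiration=8760h \
    --driver=docker --container-runtime=docker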

TestDockerFlags (39.61s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-580499 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0915 07:26:37.029444    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/skaffold-564371/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-580499 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (36.790786037s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-580499 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-580499 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-580499" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-580499
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-580499: (2.136213509s)
--- PASS: TestDockerFlags (39.61s)
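
The assertions behind docker_test.go:56-67 are that --docker-env values land in the dockerd unit's Environment and --docker-opt values in its ExecStart line. The same verification by hand, assuming profile docker-flags-demo:

  minikube start -p docker-flags-demo --memory=2048 --install-addons=false --wait=false \
    --docker-env=FOO=BAR --docker-opt=debug --driver=docker --container-runtime=docker
  # FOO=BAR should appear in Environment, --debug in ExecStart
  minikube -p docker-flags-demo ssh "sudo systemctl show docker --property=Environment --no-pager"
  minikube -p docker-flags-demo ssh "sudo systemctl show docker --property=ExecStart --no-pager"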

TestForceSystemdFlag (54.02s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-584666 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-584666 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (51.301160807s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-584666 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-584666" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-584666
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-584666: (2.18041292s)
--- PASS: TestForceSystemdFlag (54.02s)
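
--force-systemd switches the container runtime's cgroup driver from the default cgroupfs to systemd, and the single ssh probe above is the whole assertion. Sketch, assuming profile systemd-demo:

  minikube start -p systemd-demo --memory=2048 --force-systemd \
    --driver=docker --container-runtime=docker
  # should print: systemd
  minikube -p systemd-demo ssh "docker info --format '{{.CgroupDriver}}'"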

TestForceSystemdEnv (41.04s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-838035 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0915 07:25:55.797163    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/functional-175112/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-838035 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (37.936528587s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-838035 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-838035" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-838035
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-838035: (2.443978333s)
--- PASS: TestForceSystemdEnv (41.04s)

TestErrorSpam/setup (32.8s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-023367 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-023367 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-023367 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-023367 --driver=docker  --container-runtime=docker: (32.795507028s)
--- PASS: TestErrorSpam/setup (32.80s)

TestErrorSpam/start (0.71s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-023367 --log_dir /tmp/nospam-023367 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-023367 --log_dir /tmp/nospam-023367 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-023367 --log_dir /tmp/nospam-023367 start --dry-run
--- PASS: TestErrorSpam/start (0.71s)

TestErrorSpam/status (1.03s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-023367 --log_dir /tmp/nospam-023367 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-023367 --log_dir /tmp/nospam-023367 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-023367 --log_dir /tmp/nospam-023367 status
--- PASS: TestErrorSpam/status (1.03s)

TestErrorSpam/pause (1.44s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-023367 --log_dir /tmp/nospam-023367 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-023367 --log_dir /tmp/nospam-023367 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-023367 --log_dir /tmp/nospam-023367 pause
--- PASS: TestErrorSpam/pause (1.44s)

TestErrorSpam/unpause (1.54s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-023367 --log_dir /tmp/nospam-023367 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-023367 --log_dir /tmp/nospam-023367 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-023367 --log_dir /tmp/nospam-023367 unpause
--- PASS: TestErrorSpam/unpause (1.54s)

TestErrorSpam/stop (11.03s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-023367 --log_dir /tmp/nospam-023367 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-023367 --log_dir /tmp/nospam-023367 stop: (10.842201119s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-023367 --log_dir /tmp/nospam-023367 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-023367 --log_dir /tmp/nospam-023367 stop
--- PASS: TestErrorSpam/stop (11.03s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19644-2359/.minikube/files/etc/test/nested/copy/7668/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (45.27s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-175112 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-175112 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (45.272983112s)
--- PASS: TestFunctional/serial/StartWithProxy (45.27s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (34.06s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-175112 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-175112 --alsologtostderr -v=8: (34.05915822s)
functional_test.go:663: soft start took 34.064029615s for "functional-175112" cluster.
--- PASS: TestFunctional/serial/SoftStart (34.06s)

TestFunctional/serial/KubeContext (0.1s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.10s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-175112 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-175112 cache add registry.k8s.io/pause:3.1: (1.084506835s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-175112 cache add registry.k8s.io/pause:3.3: (1.19099195s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.27s)

TestFunctional/serial/CacheCmd/cache/add_local (0.91s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-175112 /tmp/TestFunctionalserialCacheCmdcacheadd_local4076499529/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 cache add minikube-local-cache-test:functional-175112
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 cache delete minikube-local-cache-test:functional-175112
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-175112
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.91s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.69s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.69s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.61s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-175112 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (302.005091ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.61s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 kubectl -- --context functional-175112 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-175112 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (40.99s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-175112 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-175112 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.978453563s)
functional_test.go:761: restart took 40.97855671s for "functional-175112" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.99s)

TestFunctional/serial/ComponentHealth (0.09s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-175112 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)

TestFunctional/serial/LogsCmd (1.19s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-175112 logs: (1.186156605s)
--- PASS: TestFunctional/serial/LogsCmd (1.19s)

TestFunctional/serial/LogsFileCmd (1.49s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 logs --file /tmp/TestFunctionalserialLogsFileCmd1758000405/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-175112 logs --file /tmp/TestFunctionalserialLogsFileCmd1758000405/001/logs.txt: (1.493464918s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.49s)

TestFunctional/serial/InvalidService (4.49s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-175112 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-175112
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-175112: exit status 115 (601.027828ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31054 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-175112 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.49s)

TestFunctional/parallel/ConfigCmd (0.45s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-175112 config get cpus: exit status 14 (77.194752ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-175112 config get cpus: exit status 14 (73.114321ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)

TestFunctional/parallel/DashboardCmd (11.05s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-175112 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-175112 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 49014: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.05s)

TestFunctional/parallel/DryRun (0.57s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-175112 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-175112 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (255.507259ms)

-- stdout --
	* [functional-175112] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19644-2359/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-2359/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0915 06:48:24.832685   48589 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:48:24.833324   48589 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:48:24.833345   48589 out.go:358] Setting ErrFile to fd 2...
	I0915 06:48:24.833783   48589 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:48:24.834496   48589 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-2359/.minikube/bin
	I0915 06:48:24.835344   48589 out.go:352] Setting JSON to false
	I0915 06:48:24.838451   48589 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1857,"bootTime":1726381048,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0915 06:48:24.838555   48589 start.go:139] virtualization:  
	I0915 06:48:24.841331   48589 out.go:177] * [functional-175112] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0915 06:48:24.843505   48589 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 06:48:24.843713   48589 notify.go:220] Checking for updates...
	I0915 06:48:24.847847   48589 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 06:48:24.849645   48589 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-2359/kubeconfig
	I0915 06:48:24.851592   48589 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-2359/.minikube
	I0915 06:48:24.853625   48589 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0915 06:48:24.855652   48589 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 06:48:24.858103   48589 config.go:182] Loaded profile config "functional-175112": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 06:48:24.858651   48589 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:48:24.896896   48589 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0915 06:48:24.897013   48589 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:48:24.971316   48589 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-15 06:48:24.959884619 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 06:48:24.971429   48589 docker.go:318] overlay module found
	I0915 06:48:24.973399   48589 out.go:177] * Using the docker driver based on existing profile
	I0915 06:48:24.975184   48589 start.go:297] selected driver: docker
	I0915 06:48:24.975205   48589 start.go:901] validating driver "docker" against &{Name:functional-175112 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-175112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:48:24.975316   48589 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 06:48:24.977906   48589 out.go:201] 
	W0915 06:48:24.979651   48589 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0915 06:48:24.981550   48589 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-175112 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.57s)

TestFunctional/parallel/InternationalLanguage (0.23s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-175112 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-175112 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (234.576474ms)

-- stdout --
	* [functional-175112] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19644-2359/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-2359/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0915 06:48:24.554866   48507 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:48:24.555032   48507 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:48:24.555044   48507 out.go:358] Setting ErrFile to fd 2...
	I0915 06:48:24.555050   48507 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:48:24.556419   48507 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-2359/.minikube/bin
	I0915 06:48:24.556929   48507 out.go:352] Setting JSON to false
	I0915 06:48:24.558031   48507 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1856,"bootTime":1726381048,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0915 06:48:24.558120   48507 start.go:139] virtualization:  
	I0915 06:48:24.561460   48507 out.go:177] * [functional-175112] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0915 06:48:24.563517   48507 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 06:48:24.564927   48507 notify.go:220] Checking for updates...
	I0915 06:48:24.570905   48507 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 06:48:24.572814   48507 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-2359/kubeconfig
	I0915 06:48:24.574819   48507 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-2359/.minikube
	I0915 06:48:24.576872   48507 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0915 06:48:24.578476   48507 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 06:48:24.582047   48507 config.go:182] Loaded profile config "functional-175112": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 06:48:24.582594   48507 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:48:24.627300   48507 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0915 06:48:24.627417   48507 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:48:24.714920   48507 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-15 06:48:24.701159549 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 06:48:24.715084   48507 docker.go:318] overlay module found
	I0915 06:48:24.717860   48507 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0915 06:48:24.719747   48507 start.go:297] selected driver: docker
	I0915 06:48:24.719763   48507 start.go:901] validating driver "docker" against &{Name:functional-175112 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-175112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:48:24.719860   48507 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 06:48:24.722296   48507 out.go:201] 
	W0915 06:48:24.724185   48507 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0915 06:48:24.726097   48507 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.23s)

TestFunctional/parallel/StatusCmd (1.19s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.19s)

TestFunctional/parallel/ServiceCmdConnect (11.73s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-175112 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-175112 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-lnvrh" [49ac8127-3df6-417a-86f3-f10c643f30de] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-lnvrh" [49ac8127-3df6-417a-86f3-f10c643f30de] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.00363611s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30987
functional_test.go:1675: http://192.168.49.2:30987: success! body:

Hostname: hello-node-connect-65d86f57f4-lnvrh

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30987
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.73s)

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (28.22s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [f8cf6860-898f-4156-8430-8c83e9827e20] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004201563s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-175112 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-175112 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-175112 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-175112 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a26fdf48-9dfc-44ea-b277-a3d9418e952a] Pending
helpers_test.go:344: "sp-pod" [a26fdf48-9dfc-44ea-b277-a3d9418e952a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a26fdf48-9dfc-44ea-b277-a3d9418e952a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004251827s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-175112 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-175112 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-175112 delete -f testdata/storage-provisioner/pod.yaml: (1.190080616s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-175112 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b354a0f3-33c6-4857-825a-9998f064831e] Pending
helpers_test.go:344: "sp-pod" [b354a0f3-33c6-4857-825a-9998f064831e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b354a0f3-33c6-4857-825a-9998f064831e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004208425s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-175112 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.22s)

TestFunctional/parallel/SSHCmd (0.73s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.73s)

TestFunctional/parallel/CpCmd (2.47s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 ssh -n functional-175112 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 cp functional-175112:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2473684512/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 ssh -n functional-175112 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 ssh -n functional-175112 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.47s)

TestFunctional/parallel/FileSync (0.37s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/7668/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 ssh "sudo cat /etc/test/nested/copy/7668/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)

TestFunctional/parallel/CertSync (2.06s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/7668.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 ssh "sudo cat /etc/ssl/certs/7668.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/7668.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 ssh "sudo cat /usr/share/ca-certificates/7668.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/76682.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 ssh "sudo cat /etc/ssl/certs/76682.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/76682.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 ssh "sudo cat /usr/share/ca-certificates/76682.pem"
E0915 06:48:39.557912    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:48:39.565766    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:48:39.577365    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
E0915 06:48:39.726939    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/CertSync (2.06s)

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-175112 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.29s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-175112 ssh "sudo systemctl is-active crio": exit status 1 (285.172666ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.29s)

TestFunctional/parallel/License (0.24s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.24s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-175112 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-175112 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-175112 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 45877: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-175112 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-175112 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.46s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-175112 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [81cbedeb-0fd9-4ba2-9d54-174ca774a030] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [81cbedeb-0fd9-4ba2-9d54-174ca774a030] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003880532s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.46s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-175112 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.107.177.23 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-175112 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
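Note: the TunnelCmd tests exercise the LoadBalancer flow: minikube tunnel runs as a daemon and assigns an ingress IP to services of type LoadBalancer, which the suite then reads and curls. A by-hand sketch with this run's names:

	out/minikube-linux-arm64 -p functional-175112 tunnel &
	kubectl --context functional-175112 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'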

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (6.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-175112 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-175112 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-mx8w7" [42c2abca-abee-4a79-b7b2-a8110debcb22] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-mx8w7" [42c2abca-abee-4a79-b7b2-a8110debcb22] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.004664322s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.24s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 service list -o json
functional_test.go:1494: Took "508.043786ms" to run "out/minikube-linux-arm64 -p functional-175112 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.63s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.63s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30183
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.57s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.53s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.56s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "485.16527ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "70.360023ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.56s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30183
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.49s)
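Note: for a NodePort service the reported URL is the node IP plus the allocated node port, so the endpoint above can be cross-checked by hand (sketch; names from this run):

	out/minikube-linux-arm64 -p functional-175112 ip
	kubectl --context functional-175112 get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}'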

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.54s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "417.849249ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "122.196478ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.54s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.66s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-175112 /tmp/TestFunctionalparallelMountCmdany-port1129381311/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726382903190263573" to /tmp/TestFunctionalparallelMountCmdany-port1129381311/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726382903190263573" to /tmp/TestFunctionalparallelMountCmdany-port1129381311/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726382903190263573" to /tmp/TestFunctionalparallelMountCmdany-port1129381311/001/test-1726382903190263573
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-175112 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (486.90378ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 15 06:48 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 15 06:48 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 15 06:48 test-1726382903190263573
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 ssh cat /mount-9p/test-1726382903190263573
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-175112 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [3dd364f4-ce79-411c-a00a-77da53897c3f] Pending
helpers_test.go:344: "busybox-mount" [3dd364f4-ce79-411c-a00a-77da53897c3f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [3dd364f4-ce79-411c-a00a-77da53897c3f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [3dd364f4-ce79-411c-a00a-77da53897c3f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004318428s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-175112 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-175112 /tmp/TestFunctionalparallelMountCmdany-port1129381311/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.66s)
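Note: minikube mount exports the host directory to the guest over 9p, which is why the test verifies the mount with findmnt before running a pod against it. A minimal sketch (the host path here is an example):

	out/minikube-linux-arm64 mount -p functional-175112 /tmp/example-dir:/mount-9p &
	out/minikube-linux-arm64 -p functional-175112 ssh "findmnt -T /mount-9p | grep 9p"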

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.38s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-175112 /tmp/TestFunctionalparallelMountCmdspecific-port2623933077/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-175112 /tmp/TestFunctionalparallelMountCmdspecific-port2623933077/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-175112 ssh "sudo umount -f /mount-9p": exit status 1 (325.410797ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-175112 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-175112 /tmp/TestFunctionalparallelMountCmdspecific-port2623933077/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.38s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.92s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-175112 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1295956905/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-175112 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1295956905/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-175112 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1295956905/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-175112 ssh "findmnt -T" /mount1: (1.158972462s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-175112 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-175112 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1295956905/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-175112 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1295956905/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-175112 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1295956905/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.92s)
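Note: the cleanup verified here hinges on mount --kill, which terminates any leftover mount processes for the profile in one shot:

	out/minikube-linux-arm64 mount -p functional-175112 --kill=true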

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (1.14s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 version -o=json --components
E0915 06:48:40.213584    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-175112 version -o=json --components: (1.138908562s)
--- PASS: TestFunctional/parallel/Version/components (1.14s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-175112 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-175112
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-175112
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-175112 image ls --format short --alsologtostderr:
I0915 06:48:42.099512   51811 out.go:345] Setting OutFile to fd 1 ...
I0915 06:48:42.099743   51811 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:48:42.099759   51811 out.go:358] Setting ErrFile to fd 2...
I0915 06:48:42.099765   51811 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:48:42.100159   51811 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-2359/.minikube/bin
I0915 06:48:42.100985   51811 config.go:182] Loaded profile config "functional-175112": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 06:48:42.104207   51811 config.go:182] Loaded profile config "functional-175112": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 06:48:42.104847   51811 cli_runner.go:164] Run: docker container inspect functional-175112 --format={{.State.Status}}
I0915 06:48:42.148047   51811 ssh_runner.go:195] Run: systemctl --version
I0915 06:48:42.148101   51811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-175112
I0915 06:48:42.194948   51811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/functional-175112/id_rsa Username:docker}
I0915 06:48:42.299452   51811 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)
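Note: image ls supports several output formats, exercised across the ImageListShort/Table/Json/Yaml tests; each variant shells out to docker images --no-trunc on the node, as the stderr traces show. The four invocations:

	out/minikube-linux-arm64 -p functional-175112 image ls --format short
	out/minikube-linux-arm64 -p functional-175112 image ls --format table
	out/minikube-linux-arm64 -p functional-175112 image ls --format json
	out/minikube-linux-arm64 -p functional-175112 image ls --format yaml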

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-175112 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| docker.io/library/minikube-local-cache-test | functional-175112 | ae747db782928 | 30B    |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| docker.io/kicbase/echo-server               | functional-175112 | ce2d2cda2d858 | 4.78MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
| docker.io/library/nginx                     | latest            | 195245f0c7927 | 193MB  |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-175112 image ls --format table --alsologtostderr:
I0915 06:48:42.446235   51879 out.go:345] Setting OutFile to fd 1 ...
I0915 06:48:42.446430   51879 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:48:42.446443   51879 out.go:358] Setting ErrFile to fd 2...
I0915 06:48:42.446449   51879 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:48:42.446738   51879 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-2359/.minikube/bin
I0915 06:48:42.447529   51879 config.go:182] Loaded profile config "functional-175112": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 06:48:42.447715   51879 config.go:182] Loaded profile config "functional-175112": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 06:48:42.448371   51879 cli_runner.go:164] Run: docker container inspect functional-175112 --format={{.State.Status}}
I0915 06:48:42.480298   51879 ssh_runner.go:195] Run: systemctl --version
I0915 06:48:42.480352   51879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-175112
I0915 06:48:42.512064   51879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/functional-175112/id_rsa Username:docker}
I0915 06:48:42.615534   51879 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-175112 image ls --format json --alsologtostderr:
[{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"ae747db7829287dca27ea4a3db0a6c62160539798753b37b3dbb9b1cf3d1b0ea","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-175112"],"size":"30"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-175112"],"size":"4780000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-175112 image ls --format json --alsologtostderr:
I0915 06:48:42.406631   51873 out.go:345] Setting OutFile to fd 1 ...
I0915 06:48:42.406767   51873 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:48:42.406790   51873 out.go:358] Setting ErrFile to fd 2...
I0915 06:48:42.406797   51873 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:48:42.407073   51873 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-2359/.minikube/bin
I0915 06:48:42.407751   51873 config.go:182] Loaded profile config "functional-175112": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 06:48:42.407917   51873 config.go:182] Loaded profile config "functional-175112": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 06:48:42.408521   51873 cli_runner.go:164] Run: docker container inspect functional-175112 --format={{.State.Status}}
I0915 06:48:42.450310   51873 ssh_runner.go:195] Run: systemctl --version
I0915 06:48:42.450356   51873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-175112
I0915 06:48:42.476181   51873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/functional-175112/id_rsa Username:docker}
I0915 06:48:42.578413   51873 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 image ls --format yaml --alsologtostderr
E0915 06:48:42.145165    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-175112 image ls --format yaml --alsologtostderr:
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: ae747db7829287dca27ea4a3db0a6c62160539798753b37b3dbb9b1cf3d1b0ea
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-175112
size: "30"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-175112
size: "4780000"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-175112 image ls --format yaml --alsologtostderr:
I0915 06:48:42.098283   51812 out.go:345] Setting OutFile to fd 1 ...
I0915 06:48:42.098550   51812 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:48:42.098584   51812 out.go:358] Setting ErrFile to fd 2...
I0915 06:48:42.098604   51812 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:48:42.098923   51812 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-2359/.minikube/bin
I0915 06:48:42.099880   51812 config.go:182] Loaded profile config "functional-175112": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 06:48:42.100088   51812 config.go:182] Loaded profile config "functional-175112": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 06:48:42.100919   51812 cli_runner.go:164] Run: docker container inspect functional-175112 --format={{.State.Status}}
I0915 06:48:42.158878   51812 ssh_runner.go:195] Run: systemctl --version
I0915 06:48:42.158940   51812 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-175112
I0915 06:48:42.204914   51812 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/functional-175112/id_rsa Username:docker}
I0915 06:48:42.322396   51812 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-175112 ssh pgrep buildkitd: exit status 1 (290.076066ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 image build -t localhost/my-image:functional-175112 testdata/build --alsologtostderr
E0915 06:48:44.707211    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-175112 image build -t localhost/my-image:functional-175112 testdata/build --alsologtostderr: (2.589043094s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-175112 image build -t localhost/my-image:functional-175112 testdata/build --alsologtostderr:
I0915 06:48:42.949366   52010 out.go:345] Setting OutFile to fd 1 ...
I0915 06:48:42.949869   52010 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:48:42.949915   52010 out.go:358] Setting ErrFile to fd 2...
I0915 06:48:42.949934   52010 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:48:42.950321   52010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-2359/.minikube/bin
I0915 06:48:42.951030   52010 config.go:182] Loaded profile config "functional-175112": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 06:48:42.952285   52010 config.go:182] Loaded profile config "functional-175112": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 06:48:42.952857   52010 cli_runner.go:164] Run: docker container inspect functional-175112 --format={{.State.Status}}
I0915 06:48:42.969892   52010 ssh_runner.go:195] Run: systemctl --version
I0915 06:48:42.969948   52010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-175112
I0915 06:48:42.986516   52010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/functional-175112/id_rsa Username:docker}
I0915 06:48:43.086862   52010 build_images.go:161] Building image from path: /tmp/build.2220785556.tar
I0915 06:48:43.086931   52010 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0915 06:48:43.096564   52010 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2220785556.tar
I0915 06:48:43.099981   52010 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2220785556.tar: stat -c "%s %y" /var/lib/minikube/build/build.2220785556.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2220785556.tar': No such file or directory
I0915 06:48:43.100010   52010 ssh_runner.go:362] scp /tmp/build.2220785556.tar --> /var/lib/minikube/build/build.2220785556.tar (3072 bytes)
I0915 06:48:43.125394   52010 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2220785556
I0915 06:48:43.134700   52010 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2220785556 -xf /var/lib/minikube/build/build.2220785556.tar
I0915 06:48:43.143714   52010 docker.go:360] Building image: /var/lib/minikube/build/build.2220785556
I0915 06:48:43.143790   52010 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-175112 /var/lib/minikube/build/build.2220785556
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.1s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.5s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.3s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:8c8b17c7926cc806d0b498f444ec0e4450d1d470999ec592ccad23c36d8db280 done
#8 naming to localhost/my-image:functional-175112 done
#8 DONE 0.1s
I0915 06:48:45.456025   52010 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-175112 /var/lib/minikube/build/build.2220785556: (2.312207992s)
I0915 06:48:45.456102   52010 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2220785556
I0915 06:48:45.469053   52010 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2220785556.tar
I0915 06:48:45.479070   52010 build_images.go:217] Built localhost/my-image:functional-175112 from /tmp/build.2220785556.tar
I0915 06:48:45.479102   52010 build_images.go:133] succeeded building to: functional-175112
I0915 06:48:45.479108   52010 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.10s)
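Note: image build tars the local context, copies it to the node, and runs docker build there, as the trace above shows. Judging from that trace, a hypothetical context equivalent to testdata/build would be just content.txt plus a three-line Dockerfile:

	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /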

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.69s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
2024/09/15 06:48:36 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-175112
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.69s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.26s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.26s)
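Note: update-context rewrites the profile's kubeconfig entry so the server address matches the machine's current IP, which is what all three no_* cases above exercise (sketch):

	out/minikube-linux-arm64 -p functional-175112 update-context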

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (1.29s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-175112 docker-env) && out/minikube-linux-arm64 status -p functional-175112"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-175112 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.29s)
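Note: docker-env prints the environment variables that point a local docker client at the daemon inside the node, hence the eval wrapper in the test. Sketch, including the way back out (assuming the documented --unset flag):

	eval $(out/minikube-linux-arm64 -p functional-175112 docker-env)
	docker images   # now lists the node's images
	eval $(out/minikube-linux-arm64 -p functional-175112 docker-env --unset)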

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 image load --daemon kicbase/echo-server:functional-175112 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 image load --daemon kicbase/echo-server:functional-175112 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.93s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-175112
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 image load --daemon kicbase/echo-server:functional-175112 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 image ls
E0915 06:48:39.602279    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:48:39.644909    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 image save kicbase/echo-server:functional-175112 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
E0915 06:48:39.888155    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 image rm kicbase/echo-server:functional-175112 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
E0915 06:48:40.855341    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.73s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-175112
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-175112 image save --daemon kicbase/echo-server:functional-175112 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-175112
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.52s)
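Note: the save/load tests above form a round trip: an image can be exported to a tar file or to the host docker daemon and re-imported either way. Condensed sketch (the tar path is an example):

	out/minikube-linux-arm64 -p functional-175112 image save kicbase/echo-server:functional-175112 /tmp/echo-server.tar
	out/minikube-linux-arm64 -p functional-175112 image load /tmp/echo-server.tar
	out/minikube-linux-arm64 -p functional-175112 image save --daemon kicbase/echo-server:functional-175112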

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-175112
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-175112
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-175112
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (123.47s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-511945 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0915 06:48:49.829374    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:49:00.072881    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:49:20.569128    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:50:01.530550    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-511945 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m2.567534851s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (123.47s)
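StartCluster passes --ha to bring up a cluster with multiple control-plane nodes behind a shared virtual IP (the 192.168.49.254 endpoint seen in the status traces further down). A reduced form of the invocation used above, with the profile name and flags from this run:

  # Bring up an HA (multi-control-plane) cluster, then verify node health.
  $ minikube start -p ha-511945 --ha --wait=true --memory=2200 --driver=docker --container-runtime=docker
  $ minikube -p ha-511945 status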

                                                
                                    
TestMultiControlPlane/serial/DeployApp (10.98s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-511945 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-511945 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-511945 -- rollout status deployment/busybox: (7.526249421s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-511945 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-511945 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-511945 -- exec busybox-7dff88458-lp92n -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-511945 -- exec busybox-7dff88458-nlxq5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-511945 -- exec busybox-7dff88458-qxq47 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-511945 -- exec busybox-7dff88458-lp92n -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-511945 -- exec busybox-7dff88458-nlxq5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-511945 -- exec busybox-7dff88458-qxq47 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-511945 -- exec busybox-7dff88458-lp92n -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-511945 -- exec busybox-7dff88458-nlxq5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-511945 -- exec busybox-7dff88458-qxq47 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (10.98s)
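DeployApp verifies in-cluster DNS from every busybox replica. The essential steps, condensed from the commands above (the pod name is a placeholder for one of the busybox-7dff88458-* pods):

  # Deploy the DNS test pods, wait for rollout, then resolve in-cluster names.
  $ kubectl --context ha-511945 apply -f ./testdata/ha/ha-pod-dns-test.yaml
  $ kubectl --context ha-511945 rollout status deployment/busybox
  $ kubectl --context ha-511945 exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local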

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.67s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-511945 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-511945 -- exec busybox-7dff88458-lp92n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-511945 -- exec busybox-7dff88458-lp92n -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-511945 -- exec busybox-7dff88458-nlxq5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-511945 -- exec busybox-7dff88458-nlxq5 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-511945 -- exec busybox-7dff88458-qxq47 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-511945 -- exec busybox-7dff88458-qxq47 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.67s)
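PingHostFromPods checks pod-to-host connectivity: it resolves host.minikube.internal inside each pod, then pings the returned address. The awk 'NR==5' / cut pipeline simply extracts the address field from busybox's nslookup output. Condensed (pod name is a placeholder; 192.168.49.1 is the docker-network gateway in this run):

  # Resolve the host from inside a pod, then ping the address it resolves to.
  $ kubectl --context ha-511945 exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  $ kubectl --context ha-511945 exec <busybox-pod> -- sh -c "ping -c 1 192.168.49.1"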

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (23.83s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-511945 -v=7 --alsologtostderr
E0915 06:51:23.452243    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-511945 -v=7 --alsologtostderr: (22.730746819s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-511945 status -v=7 --alsologtostderr: (1.101588968s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (23.83s)

TestMultiControlPlane/serial/NodeLabels (0.12s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-511945 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.8s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.80s)

TestMultiControlPlane/serial/CopyFile (19.95s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-511945 status --output json -v=7 --alsologtostderr: (1.111546141s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 cp testdata/cp-test.txt ha-511945:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 ssh -n ha-511945 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 cp ha-511945:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4209019869/001/cp-test_ha-511945.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 ssh -n ha-511945 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 cp ha-511945:/home/docker/cp-test.txt ha-511945-m02:/home/docker/cp-test_ha-511945_ha-511945-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 ssh -n ha-511945 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 ssh -n ha-511945-m02 "sudo cat /home/docker/cp-test_ha-511945_ha-511945-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 cp ha-511945:/home/docker/cp-test.txt ha-511945-m03:/home/docker/cp-test_ha-511945_ha-511945-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 ssh -n ha-511945 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 ssh -n ha-511945-m03 "sudo cat /home/docker/cp-test_ha-511945_ha-511945-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 cp ha-511945:/home/docker/cp-test.txt ha-511945-m04:/home/docker/cp-test_ha-511945_ha-511945-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 ssh -n ha-511945 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 ssh -n ha-511945-m04 "sudo cat /home/docker/cp-test_ha-511945_ha-511945-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 cp testdata/cp-test.txt ha-511945-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 ssh -n ha-511945-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 cp ha-511945-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4209019869/001/cp-test_ha-511945-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 ssh -n ha-511945-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 cp ha-511945-m02:/home/docker/cp-test.txt ha-511945:/home/docker/cp-test_ha-511945-m02_ha-511945.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 ssh -n ha-511945-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 ssh -n ha-511945 "sudo cat /home/docker/cp-test_ha-511945-m02_ha-511945.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 cp ha-511945-m02:/home/docker/cp-test.txt ha-511945-m03:/home/docker/cp-test_ha-511945-m02_ha-511945-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 ssh -n ha-511945-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 ssh -n ha-511945-m03 "sudo cat /home/docker/cp-test_ha-511945-m02_ha-511945-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 cp ha-511945-m02:/home/docker/cp-test.txt ha-511945-m04:/home/docker/cp-test_ha-511945-m02_ha-511945-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 ssh -n ha-511945-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 ssh -n ha-511945-m04 "sudo cat /home/docker/cp-test_ha-511945-m02_ha-511945-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 cp testdata/cp-test.txt ha-511945-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 ssh -n ha-511945-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 cp ha-511945-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4209019869/001/cp-test_ha-511945-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 ssh -n ha-511945-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 cp ha-511945-m03:/home/docker/cp-test.txt ha-511945:/home/docker/cp-test_ha-511945-m03_ha-511945.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 ssh -n ha-511945-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 ssh -n ha-511945 "sudo cat /home/docker/cp-test_ha-511945-m03_ha-511945.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 cp ha-511945-m03:/home/docker/cp-test.txt ha-511945-m02:/home/docker/cp-test_ha-511945-m03_ha-511945-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 ssh -n ha-511945-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 ssh -n ha-511945-m02 "sudo cat /home/docker/cp-test_ha-511945-m03_ha-511945-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 cp ha-511945-m03:/home/docker/cp-test.txt ha-511945-m04:/home/docker/cp-test_ha-511945-m03_ha-511945-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 ssh -n ha-511945-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 ssh -n ha-511945-m04 "sudo cat /home/docker/cp-test_ha-511945-m03_ha-511945-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 cp testdata/cp-test.txt ha-511945-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 ssh -n ha-511945-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 cp ha-511945-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4209019869/001/cp-test_ha-511945-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 ssh -n ha-511945-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 cp ha-511945-m04:/home/docker/cp-test.txt ha-511945:/home/docker/cp-test_ha-511945-m04_ha-511945.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 ssh -n ha-511945-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 ssh -n ha-511945 "sudo cat /home/docker/cp-test_ha-511945-m04_ha-511945.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 cp ha-511945-m04:/home/docker/cp-test.txt ha-511945-m02:/home/docker/cp-test_ha-511945-m04_ha-511945-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 ssh -n ha-511945-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 ssh -n ha-511945-m02 "sudo cat /home/docker/cp-test_ha-511945-m04_ha-511945-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 cp ha-511945-m04:/home/docker/cp-test.txt ha-511945-m03:/home/docker/cp-test_ha-511945-m04_ha-511945-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 ssh -n ha-511945-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 ssh -n ha-511945-m03 "sudo cat /home/docker/cp-test_ha-511945-m04_ha-511945-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.95s)
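CopyFile runs `minikube cp` in every direction (host to node, node to host, node to node) and verifies each copy by cat-ing the file over ssh. One round of the pattern, condensed from the commands above (the /tmp destination is illustrative):

  # Host -> node, read it back over ssh, then node -> host and node -> node.
  $ minikube -p ha-511945 cp testdata/cp-test.txt ha-511945:/home/docker/cp-test.txt
  $ minikube -p ha-511945 ssh -n ha-511945 "sudo cat /home/docker/cp-test.txt"
  $ minikube -p ha-511945 cp ha-511945:/home/docker/cp-test.txt /tmp/cp-test_ha-511945.txt
  $ minikube -p ha-511945 cp ha-511945:/home/docker/cp-test.txt ha-511945-m02:/home/docker/cp-test_ha-511945_ha-511945-m02.txt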

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.27s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-511945 node stop m02 -v=7 --alsologtostderr: (11.34504998s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-511945 status -v=7 --alsologtostderr: exit status 7 (924.614144ms)

                                                
                                                
-- stdout --
	ha-511945
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-511945-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-511945-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-511945-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 06:52:00.560133   74160 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:52:00.561436   74160 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:52:00.561505   74160 out.go:358] Setting ErrFile to fd 2...
	I0915 06:52:00.561518   74160 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:52:00.561894   74160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-2359/.minikube/bin
	I0915 06:52:00.562167   74160 out.go:352] Setting JSON to false
	I0915 06:52:00.562231   74160 mustload.go:65] Loading cluster: ha-511945
	I0915 06:52:00.562305   74160 notify.go:220] Checking for updates...
	I0915 06:52:00.563868   74160 config.go:182] Loaded profile config "ha-511945": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 06:52:00.563899   74160 status.go:255] checking status of ha-511945 ...
	I0915 06:52:00.564734   74160 cli_runner.go:164] Run: docker container inspect ha-511945 --format={{.State.Status}}
	I0915 06:52:00.594156   74160 status.go:330] ha-511945 host status = "Running" (err=<nil>)
	I0915 06:52:00.594190   74160 host.go:66] Checking if "ha-511945" exists ...
	I0915 06:52:00.594705   74160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-511945
	I0915 06:52:00.641764   74160 host.go:66] Checking if "ha-511945" exists ...
	I0915 06:52:00.642220   74160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 06:52:00.642289   74160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-511945
	I0915 06:52:00.675448   74160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32784 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/ha-511945/id_rsa Username:docker}
	I0915 06:52:00.771578   74160 ssh_runner.go:195] Run: systemctl --version
	I0915 06:52:00.776477   74160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 06:52:00.791687   74160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:52:00.850457   74160 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-15 06:52:00.840801033 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 06:52:00.851062   74160 kubeconfig.go:125] found "ha-511945" server: "https://192.168.49.254:8443"
	I0915 06:52:00.851096   74160 api_server.go:166] Checking apiserver status ...
	I0915 06:52:00.851139   74160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 06:52:00.863883   74160 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2335/cgroup
	I0915 06:52:00.874694   74160 api_server.go:182] apiserver freezer: "12:freezer:/docker/407cd3e446f0e156bc7c66a58f2a32b610be4693bda19b6e957f4b1c47e97b54/kubepods/burstable/pod06a0dd1b4cf7625930b86fee29f391e7/8fcca72264cf0dfde4e41c263759c363593ebf44fdc84724934c1c896ecd8969"
	I0915 06:52:00.874777   74160 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/407cd3e446f0e156bc7c66a58f2a32b610be4693bda19b6e957f4b1c47e97b54/kubepods/burstable/pod06a0dd1b4cf7625930b86fee29f391e7/8fcca72264cf0dfde4e41c263759c363593ebf44fdc84724934c1c896ecd8969/freezer.state
	I0915 06:52:00.885406   74160 api_server.go:204] freezer state: "THAWED"
	I0915 06:52:00.885438   74160 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0915 06:52:00.894212   74160 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0915 06:52:00.894241   74160 status.go:422] ha-511945 apiserver status = Running (err=<nil>)
	I0915 06:52:00.894252   74160 status.go:257] ha-511945 status: &{Name:ha-511945 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 06:52:00.894276   74160 status.go:255] checking status of ha-511945-m02 ...
	I0915 06:52:00.894618   74160 cli_runner.go:164] Run: docker container inspect ha-511945-m02 --format={{.State.Status}}
	I0915 06:52:00.911471   74160 status.go:330] ha-511945-m02 host status = "Stopped" (err=<nil>)
	I0915 06:52:00.911496   74160 status.go:343] host is not running, skipping remaining checks
	I0915 06:52:00.911504   74160 status.go:257] ha-511945-m02 status: &{Name:ha-511945-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 06:52:00.911524   74160 status.go:255] checking status of ha-511945-m03 ...
	I0915 06:52:00.911827   74160 cli_runner.go:164] Run: docker container inspect ha-511945-m03 --format={{.State.Status}}
	I0915 06:52:00.935438   74160 status.go:330] ha-511945-m03 host status = "Running" (err=<nil>)
	I0915 06:52:00.935465   74160 host.go:66] Checking if "ha-511945-m03" exists ...
	I0915 06:52:00.935793   74160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-511945-m03
	I0915 06:52:00.956218   74160 host.go:66] Checking if "ha-511945-m03" exists ...
	I0915 06:52:00.956672   74160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 06:52:00.956731   74160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-511945-m03
	I0915 06:52:00.975947   74160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32794 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/ha-511945-m03/id_rsa Username:docker}
	I0915 06:52:01.079431   74160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 06:52:01.093140   74160 kubeconfig.go:125] found "ha-511945" server: "https://192.168.49.254:8443"
	I0915 06:52:01.093184   74160 api_server.go:166] Checking apiserver status ...
	I0915 06:52:01.093224   74160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 06:52:01.105823   74160 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2169/cgroup
	I0915 06:52:01.123005   74160 api_server.go:182] apiserver freezer: "12:freezer:/docker/d07d4e0ba7c26f17dca9fb79bb7c64d22e5d9bff370034aa4404daa5c951800a/kubepods/burstable/pod903e27971d18bebba2b514d1f08797d8/572eae1b86994174f49849b2e46f263b9c09464845519c20bb247156b7b61910"
	I0915 06:52:01.123209   74160 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d07d4e0ba7c26f17dca9fb79bb7c64d22e5d9bff370034aa4404daa5c951800a/kubepods/burstable/pod903e27971d18bebba2b514d1f08797d8/572eae1b86994174f49849b2e46f263b9c09464845519c20bb247156b7b61910/freezer.state
	I0915 06:52:01.137369   74160 api_server.go:204] freezer state: "THAWED"
	I0915 06:52:01.137407   74160 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0915 06:52:01.145573   74160 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0915 06:52:01.145615   74160 status.go:422] ha-511945-m03 apiserver status = Running (err=<nil>)
	I0915 06:52:01.145625   74160 status.go:257] ha-511945-m03 status: &{Name:ha-511945-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 06:52:01.145643   74160 status.go:255] checking status of ha-511945-m04 ...
	I0915 06:52:01.145969   74160 cli_runner.go:164] Run: docker container inspect ha-511945-m04 --format={{.State.Status}}
	I0915 06:52:01.164545   74160 status.go:330] ha-511945-m04 host status = "Running" (err=<nil>)
	I0915 06:52:01.164570   74160 host.go:66] Checking if "ha-511945-m04" exists ...
	I0915 06:52:01.164893   74160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-511945-m04
	I0915 06:52:01.184273   74160 host.go:66] Checking if "ha-511945-m04" exists ...
	I0915 06:52:01.184648   74160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 06:52:01.184697   74160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-511945-m04
	I0915 06:52:01.203627   74160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32799 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/ha-511945-m04/id_rsa Username:docker}
	I0915 06:52:01.303226   74160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 06:52:01.315874   74160 status.go:257] ha-511945-m04 status: &{Name:ha-511945-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.27s)
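The stderr trace above shows how `minikube status` classifies an apiserver as Running: it pgreps the kube-apiserver process on the node, reads that process's freezer cgroup state (to rule out a paused container), and finally queries /healthz through the HA virtual IP. A hand-rolled approximation of the same probe, assuming cgroup v1 as in this trace (<PID> and <path> come from the preceding steps; curl -k stands in for minikube's internal Go health check):

  $ minikube -p ha-511945 ssh -n ha-511945                 # run the rest inside the node
  $ sudo pgrep -xnf 'kube-apiserver.*minikube.*'           # apiserver PID
  $ sudo egrep '^[0-9]+:freezer:' /proc/<PID>/cgroup       # freezer cgroup path
  $ sudo cat /sys/fs/cgroup/freezer/<path>/freezer.state   # expect THAWED
  $ curl -k https://192.168.49.254:8443/healthz            # expect: ok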

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.63s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.63s)

TestMultiControlPlane/serial/RestartSecondaryNode (64.2s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 node start m02 -v=7 --alsologtostderr
E0915 06:52:52.732459    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/functional-175112/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:52:52.738931    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/functional-175112/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:52:52.750287    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/functional-175112/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:52:52.771585    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/functional-175112/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:52:52.812912    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/functional-175112/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:52:52.894363    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/functional-175112/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:52:53.056384    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/functional-175112/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:52:53.378231    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/functional-175112/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:52:54.019971    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/functional-175112/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:52:55.301391    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/functional-175112/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:52:57.863197    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/functional-175112/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:53:02.984976    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/functional-175112/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-511945 node start m02 -v=7 --alsologtostderr: (1m3.072565418s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-511945 status -v=7 --alsologtostderr: (1.024367842s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (64.20s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.79s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.79s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (256.37s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-511945 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-511945 -v=7 --alsologtostderr
E0915 06:53:13.226896    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/functional-175112/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:53:33.708194    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/functional-175112/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:53:39.548646    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-511945 -v=7 --alsologtostderr: (34.291537547s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-511945 --wait=true -v=7 --alsologtostderr
E0915 06:54:07.295153    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:54:14.669955    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/functional-175112/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:55:36.591880    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/functional-175112/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-511945 --wait=true -v=7 --alsologtostderr: (3m41.931230658s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-511945
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (256.37s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.54s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-511945 node delete m03 -v=7 --alsologtostderr: (10.607931209s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.54s)
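The readiness check at ha_test.go:519 is easier to read without the shell escaping: the go-template walks every node's conditions and prints the status of each Ready condition, one per line. With m03 deleted, three nodes remain, so the expected output (an assumption, not shown in this log) is three True lines:

  $ kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
   True
   True
   True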

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.55s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.55s)

TestMultiControlPlane/serial/StopCluster (32.78s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 stop -v=7 --alsologtostderr
E0915 06:57:52.732829    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/functional-175112/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-511945 stop -v=7 --alsologtostderr: (32.663792866s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-511945 status -v=7 --alsologtostderr: exit status 7 (112.654715ms)

                                                
                                                
-- stdout --
	ha-511945
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-511945-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-511945-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 06:58:08.102051  102566 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:58:08.102199  102566 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:58:08.102224  102566 out.go:358] Setting ErrFile to fd 2...
	I0915 06:58:08.102240  102566 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:58:08.102504  102566 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-2359/.minikube/bin
	I0915 06:58:08.102718  102566 out.go:352] Setting JSON to false
	I0915 06:58:08.102758  102566 mustload.go:65] Loading cluster: ha-511945
	I0915 06:58:08.102848  102566 notify.go:220] Checking for updates...
	I0915 06:58:08.103247  102566 config.go:182] Loaded profile config "ha-511945": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 06:58:08.103267  102566 status.go:255] checking status of ha-511945 ...
	I0915 06:58:08.103880  102566 cli_runner.go:164] Run: docker container inspect ha-511945 --format={{.State.Status}}
	I0915 06:58:08.124222  102566 status.go:330] ha-511945 host status = "Stopped" (err=<nil>)
	I0915 06:58:08.124245  102566 status.go:343] host is not running, skipping remaining checks
	I0915 06:58:08.124254  102566 status.go:257] ha-511945 status: &{Name:ha-511945 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 06:58:08.124281  102566 status.go:255] checking status of ha-511945-m02 ...
	I0915 06:58:08.124623  102566 cli_runner.go:164] Run: docker container inspect ha-511945-m02 --format={{.State.Status}}
	I0915 06:58:08.146407  102566 status.go:330] ha-511945-m02 host status = "Stopped" (err=<nil>)
	I0915 06:58:08.146433  102566 status.go:343] host is not running, skipping remaining checks
	I0915 06:58:08.146441  102566 status.go:257] ha-511945-m02 status: &{Name:ha-511945-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 06:58:08.146461  102566 status.go:255] checking status of ha-511945-m04 ...
	I0915 06:58:08.146753  102566 cli_runner.go:164] Run: docker container inspect ha-511945-m04 --format={{.State.Status}}
	I0915 06:58:08.168252  102566 status.go:330] ha-511945-m04 host status = "Stopped" (err=<nil>)
	I0915 06:58:08.168277  102566 status.go:343] host is not running, skipping remaining checks
	I0915 06:58:08.168285  102566 status.go:257] ha-511945-m04 status: &{Name:ha-511945-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.78s)

TestMultiControlPlane/serial/RestartCluster (96.37s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-511945 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0915 06:58:20.434061    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/functional-175112/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:58:39.549301    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-511945 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m35.402376086s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (96.37s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)

TestMultiControlPlane/serial/AddSecondaryNode (47.59s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-511945 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-511945 --control-plane -v=7 --alsologtostderr: (46.527568659s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-511945 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-511945 status -v=7 --alsologtostderr: (1.060255673s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (47.59s)
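`minikube node add` joins a worker by default (AddWorkerNode earlier in this run); --control-plane, used here, enrolls the new node as an additional control plane instead. Condensed from the logged invocations:

  # Add a worker, or an extra control-plane node, to a running cluster.
  $ minikube node add -p ha-511945                    # worker node
  $ minikube node add -p ha-511945 --control-plane    # control-plane node
  $ minikube -p ha-511945 status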

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.79s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.79s)

TestImageBuild/serial/Setup (31.92s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-932918 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-932918 --driver=docker  --container-runtime=docker: (31.915343822s)
--- PASS: TestImageBuild/serial/Setup (31.92s)

TestImageBuild/serial/NormalBuild (1.78s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-932918
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-932918: (1.779122618s)
--- PASS: TestImageBuild/serial/NormalBuild (1.78s)

TestImageBuild/serial/BuildWithBuildArg (0.98s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-932918
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.98s)

TestImageBuild/serial/BuildWithDockerIgnore (0.85s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-932918
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.85s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.81s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-932918
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.81s)
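The ImageBuild tests above cover the main `minikube image build` variants. Collected in one place, with paths and tags as used in this run:

  # Plain build; build-args with caching disabled; an alternate Dockerfile path.
  $ minikube -p image-932918 image build -t aaa:latest ./testdata/image-build/test-normal
  $ minikube -p image-932918 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg
  $ minikube -p image-932918 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f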

                                                
                                    
TestJSONOutput/start/Command (74.9s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-842099 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-842099 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (1m14.890859529s)
--- PASS: TestJSONOutput/start/Command (74.90s)
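With --output=json, minikube emits one CloudEvents-style JSON object per line; the exact shape ("specversion", a "type" such as io.k8s.sigs.minikube.step/info/error, and a "data" payload) is visible in the TestErrorJSONOutput output at the end of this section. A sketch of consuming the stream, assuming jq is available on the host:

  # Print each event's type plus its message or step name, tab-separated.
  $ minikube start -p json-output-842099 --output=json --user=testUser | jq -r '[.type, .data.message // .data.name] | @tsv'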

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.59s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-842099 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.59s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.52s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-842099 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.52s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.85s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-842099 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-842099 --output=json --user=testUser: (5.853251795s)
--- PASS: TestJSONOutput/stop/Command (5.85s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-502632 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-502632 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (78.381903ms)

-- stdout --
	{"specversion":"1.0","id":"05be29cc-1860-4613-bef9-b58bec7923a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-502632] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9ce3ff9a-ae0d-40ce-90b1-854061662a32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19644"}}
	{"specversion":"1.0","id":"3aa77f39-6fc8-4ac0-9a22-9e0b6d3f60eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"42df8452-dd47-42ed-b408-78e616958731","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19644-2359/kubeconfig"}}
	{"specversion":"1.0","id":"95bfd1f5-c8d3-4065-b864-3b0e98eb1731","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-2359/.minikube"}}
	{"specversion":"1.0","id":"0a5c46c3-c7cc-4284-b71b-fee82291607b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"7066e31b-baa2-47d7-ab27-5f33f6cdc633","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f784377f-e737-4ae9-8851-5666a714e79e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-502632" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-502632
--- PASS: TestErrorJSONOutput (0.22s)
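Note: the lines in the stdout block above are CloudEvents-style JSON records emitted by --output=json. A minimal Go sketch for consuming such a stream, assuming only the keys visible in this output (specversion, id, type, data); illustrative only, not part of the test suite:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// cloudEvent mirrors only the keys visible in the log output above.
	type cloudEvent struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		// Pipe minikube's JSON output in, e.g.:
		//   minikube start --output=json ... | ./thisprogram
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev cloudEvent
			if json.Unmarshal(sc.Bytes(), &ev) != nil {
				continue // not every line is guaranteed to be JSON
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				// Error events carry the exit code, e.g. DRV_UNSUPPORTED_OS / 56 above.
				fmt.Printf("%s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}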

TestKicCustomNetwork/create_custom_network (33.46s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-608000 --network=
E0915 07:02:52.733069    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/functional-175112/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-608000 --network=: (31.378043444s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-608000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-608000
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-608000: (2.059878624s)
--- PASS: TestKicCustomNetwork/create_custom_network (33.46s)

TestKicCustomNetwork/use_default_bridge_network (32.36s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-490734 --network=bridge
E0915 07:03:39.548657    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-490734 --network=bridge: (30.337114795s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-490734" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-490734
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-490734: (1.995995375s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (32.36s)

TestKicExistingNetwork (31.51s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-176637 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-176637 --network=existing-network: (29.329110528s)
helpers_test.go:175: Cleaning up "existing-network-176637" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-176637
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-176637: (2.014181319s)
--- PASS: TestKicExistingNetwork (31.51s)
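Note: this test exercises attaching minikube to a Docker network that already exists rather than letting it create one. A minimal sketch of the same flow via plain CLI calls, assuming the network is pre-created with `docker network create` (the suite's own setup helper in kic_custom_network_test.go may do this differently):

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		// Pre-create the network, then point minikube at it; names from the log.
		steps := [][]string{
			{"docker", "network", "create", "existing-network"},
			{"out/minikube-linux-arm64", "start", "-p", "existing-network-176637", "--network=existing-network"},
		}
		for _, s := range steps {
			cmd := exec.Command(s[0], s[1:]...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				log.Fatalf("%v: %v", s, err)
			}
		}
	}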

TestKicCustomSubnet (33.46s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-587499 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-587499 --subnet=192.168.60.0/24: (31.312905306s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-587499 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-587499" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-587499
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-587499: (2.122319183s)
--- PASS: TestKicCustomSubnet (33.46s)
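Note: the subnet assertion above leans on Docker's Go-template inspect output. A minimal sketch reproducing the check, using the exact inspect command and subnet value from this log:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		const want = "192.168.60.0/24" // value passed to --subnet above
		out, err := exec.Command("docker", "network", "inspect", "custom-subnet-587499",
			"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
		if err != nil {
			log.Fatal(err)
		}
		got := strings.TrimSpace(string(out))
		if got != want {
			log.Fatalf("subnet mismatch: got %s, want %s", got, want)
		}
		fmt.Println("subnet matches:", got)
	}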

TestKicStaticIP (32.79s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-289244 --static-ip=192.168.200.200
E0915 07:05:02.658105    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-289244 --static-ip=192.168.200.200: (30.541937084s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-289244 ip
helpers_test.go:175: Cleaning up "static-ip-289244" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-289244
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-289244: (2.100867314s)
--- PASS: TestKicStaticIP (32.79s)

TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (71.51s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-324263 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-324263 --driver=docker  --container-runtime=docker: (31.615100182s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-327036 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-327036 --driver=docker  --container-runtime=docker: (34.265198152s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-324263
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-327036
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-327036" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-327036
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-327036: (2.205261446s)
helpers_test.go:175: Cleaning up "first-324263" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-324263
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-324263: (2.138663223s)
--- PASS: TestMinikubeProfile (71.51s)
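Note: `profile list -ojson` returns machine-readable profile data. Since the schema is not visible in this log, a sketch that decodes the payload generically, without assuming any field names:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "-ojson").Output()
		if err != nil {
			log.Fatal(err)
		}
		var data map[string]any
		if err := json.Unmarshal(out, &data); err != nil {
			log.Fatal(err)
		}
		for key, val := range data {
			fmt.Printf("%s: %T\n", key, val) // inspect top-level keys before relying on them
		}
	}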

TestMountStart/serial/StartWithMountFirst (7.93s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-461436 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-461436 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.933212506s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.93s)

TestMountStart/serial/VerifyMountFirst (0.27s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-461436 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (8.32s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-463561 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-463561 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.315032926s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.32s)

TestMountStart/serial/VerifyMountSecond (0.26s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-463561 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.46s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-461436 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-461436 --alsologtostderr -v=5: (1.456470309s)
--- PASS: TestMountStart/serial/DeleteFirst (1.46s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-463561 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.22s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-463561
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-463561: (1.219001651s)
--- PASS: TestMountStart/serial/Stop (1.22s)

TestMountStart/serial/RestartStopped (8.21s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-463561
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-463561: (7.209112898s)
--- PASS: TestMountStart/serial/RestartStopped (8.21s)

TestMountStart/serial/VerifyMountPostStop (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-463561 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (73.42s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-417690 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0915 07:07:52.732857    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/functional-175112/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-417690 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m12.831920205s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (73.42s)

TestMultiNode/serial/DeployApp2Nodes (43.32s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-417690 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-417690 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-417690 -- rollout status deployment/busybox: (3.281547806s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-417690 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-417690 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-417690 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-417690 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-417690 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
E0915 07:08:39.549493    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-417690 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-417690 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-417690 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-417690 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-417690 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-417690 -- exec busybox-7dff88458-45grx -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-417690 -- exec busybox-7dff88458-l8fhn -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-417690 -- exec busybox-7dff88458-45grx -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-417690 -- exec busybox-7dff88458-l8fhn -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-417690 -- exec busybox-7dff88458-45grx -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-417690 -- exec busybox-7dff88458-l8fhn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (43.32s)
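Note: the repeated jsonpath queries above are a poll loop. The busybox deployment reports rolled out before both pods have IPs assigned, so the test keeps re-querying until two addresses appear. A minimal sketch of that loop (retry count and interval are assumptions):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		for i := 0; i < 10; i++ {
			out, err := exec.Command("kubectl", "--context", "multinode-417690",
				"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
			if err != nil {
				log.Fatal(err)
			}
			ips := strings.Fields(string(out))
			if len(ips) >= 2 {
				fmt.Println("pod IPs:", ips)
				return
			}
			time.Sleep(5 * time.Second) // backoff interval is an assumption
		}
		log.Fatal("timed out waiting for 2 pod IPs")
	}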

TestMultiNode/serial/PingHostFrom2Pods (1.02s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-417690 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-417690 -- exec busybox-7dff88458-45grx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-417690 -- exec busybox-7dff88458-45grx -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-417690 -- exec busybox-7dff88458-l8fhn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-417690 -- exec busybox-7dff88458-l8fhn -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.02s)

TestMultiNode/serial/AddNode (17.69s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-417690 -v 3 --alsologtostderr
E0915 07:09:15.795429    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/functional-175112/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-417690 -v 3 --alsologtostderr: (16.912753483s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.69s)

TestMultiNode/serial/MultiNodeLabels (0.09s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-417690 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.35s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.35s)

TestMultiNode/serial/CopyFile (10.18s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 cp testdata/cp-test.txt multinode-417690:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 ssh -n multinode-417690 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 cp multinode-417690:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile961429444/001/cp-test_multinode-417690.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 ssh -n multinode-417690 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 cp multinode-417690:/home/docker/cp-test.txt multinode-417690-m02:/home/docker/cp-test_multinode-417690_multinode-417690-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 ssh -n multinode-417690 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 ssh -n multinode-417690-m02 "sudo cat /home/docker/cp-test_multinode-417690_multinode-417690-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 cp multinode-417690:/home/docker/cp-test.txt multinode-417690-m03:/home/docker/cp-test_multinode-417690_multinode-417690-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 ssh -n multinode-417690 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 ssh -n multinode-417690-m03 "sudo cat /home/docker/cp-test_multinode-417690_multinode-417690-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 cp testdata/cp-test.txt multinode-417690-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 ssh -n multinode-417690-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 cp multinode-417690-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile961429444/001/cp-test_multinode-417690-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 ssh -n multinode-417690-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 cp multinode-417690-m02:/home/docker/cp-test.txt multinode-417690:/home/docker/cp-test_multinode-417690-m02_multinode-417690.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 ssh -n multinode-417690-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 ssh -n multinode-417690 "sudo cat /home/docker/cp-test_multinode-417690-m02_multinode-417690.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 cp multinode-417690-m02:/home/docker/cp-test.txt multinode-417690-m03:/home/docker/cp-test_multinode-417690-m02_multinode-417690-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 ssh -n multinode-417690-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 ssh -n multinode-417690-m03 "sudo cat /home/docker/cp-test_multinode-417690-m02_multinode-417690-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 cp testdata/cp-test.txt multinode-417690-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 ssh -n multinode-417690-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 cp multinode-417690-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile961429444/001/cp-test_multinode-417690-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 ssh -n multinode-417690-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 cp multinode-417690-m03:/home/docker/cp-test.txt multinode-417690:/home/docker/cp-test_multinode-417690-m03_multinode-417690.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 ssh -n multinode-417690-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 ssh -n multinode-417690 "sudo cat /home/docker/cp-test_multinode-417690-m03_multinode-417690.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 cp multinode-417690-m03:/home/docker/cp-test.txt multinode-417690-m02:/home/docker/cp-test_multinode-417690-m03_multinode-417690-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 ssh -n multinode-417690-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 ssh -n multinode-417690-m02 "sudo cat /home/docker/cp-test_multinode-417690-m03_multinode-417690-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.18s)
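Note: each cp/ssh pair above is one round trip: copy a file to a node, cat it back over SSH, compare. A minimal sketch of a single round trip, using the paths and profile names from this log:

	package main

	import (
		"bytes"
		"log"
		"os"
		"os/exec"
	)

	func main() {
		want, err := os.ReadFile("testdata/cp-test.txt")
		if err != nil {
			log.Fatal(err)
		}
		// Copy the file onto the second node...
		if err := exec.Command("out/minikube-linux-arm64", "-p", "multinode-417690",
			"cp", "testdata/cp-test.txt", "multinode-417690-m02:/home/docker/cp-test.txt").Run(); err != nil {
			log.Fatal(err)
		}
		// ...then read it back over SSH and compare.
		got, err := exec.Command("out/minikube-linux-arm64", "-p", "multinode-417690",
			"ssh", "-n", "multinode-417690-m02", "sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			log.Fatal(err)
		}
		if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
			log.Fatal("copied file does not match source")
		}
	}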

TestMultiNode/serial/StopNode (2.26s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-417690 node stop m03: (1.216195919s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-417690 status: exit status 7 (515.561491ms)

-- stdout --
	multinode-417690
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-417690-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-417690-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-417690 status --alsologtostderr: exit status 7 (526.875276ms)

-- stdout --
	multinode-417690
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-417690-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-417690-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0915 07:09:38.534961  176341 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:09:38.535097  176341 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:09:38.535123  176341 out.go:358] Setting ErrFile to fd 2...
	I0915 07:09:38.535151  176341 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:09:38.535423  176341 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-2359/.minikube/bin
	I0915 07:09:38.535633  176341 out.go:352] Setting JSON to false
	I0915 07:09:38.535674  176341 mustload.go:65] Loading cluster: multinode-417690
	I0915 07:09:38.535783  176341 notify.go:220] Checking for updates...
	I0915 07:09:38.536152  176341 config.go:182] Loaded profile config "multinode-417690": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 07:09:38.536174  176341 status.go:255] checking status of multinode-417690 ...
	I0915 07:09:38.536744  176341 cli_runner.go:164] Run: docker container inspect multinode-417690 --format={{.State.Status}}
	I0915 07:09:38.556963  176341 status.go:330] multinode-417690 host status = "Running" (err=<nil>)
	I0915 07:09:38.556989  176341 host.go:66] Checking if "multinode-417690" exists ...
	I0915 07:09:38.557316  176341 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-417690
	I0915 07:09:38.583023  176341 host.go:66] Checking if "multinode-417690" exists ...
	I0915 07:09:38.583331  176341 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:09:38.583392  176341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-417690
	I0915 07:09:38.607659  176341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/multinode-417690/id_rsa Username:docker}
	I0915 07:09:38.703114  176341 ssh_runner.go:195] Run: systemctl --version
	I0915 07:09:38.707313  176341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:09:38.719097  176341 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 07:09:38.781204  176341 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-15 07:09:38.76831717 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 07:09:38.781850  176341 kubeconfig.go:125] found "multinode-417690" server: "https://192.168.67.2:8443"
	I0915 07:09:38.781885  176341 api_server.go:166] Checking apiserver status ...
	I0915 07:09:38.781934  176341 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:09:38.794355  176341 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2272/cgroup
	I0915 07:09:38.803926  176341 api_server.go:182] apiserver freezer: "12:freezer:/docker/333febe09de2b21e5cb3c96c4f986c61e2de33ffc642dd98ec941671df04ab6e/kubepods/burstable/pod30b4ce2198909fc91512e50bb34aaf1e/7aa3c45eca23683d82f2937cdd5f3f08d3faa81bdd870325d6f4c0bf50e5bea1"
	I0915 07:09:38.804000  176341 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/333febe09de2b21e5cb3c96c4f986c61e2de33ffc642dd98ec941671df04ab6e/kubepods/burstable/pod30b4ce2198909fc91512e50bb34aaf1e/7aa3c45eca23683d82f2937cdd5f3f08d3faa81bdd870325d6f4c0bf50e5bea1/freezer.state
	I0915 07:09:38.812527  176341 api_server.go:204] freezer state: "THAWED"
	I0915 07:09:38.812558  176341 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0915 07:09:38.820485  176341 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0915 07:09:38.820514  176341 status.go:422] multinode-417690 apiserver status = Running (err=<nil>)
	I0915 07:09:38.820525  176341 status.go:257] multinode-417690 status: &{Name:multinode-417690 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:09:38.820546  176341 status.go:255] checking status of multinode-417690-m02 ...
	I0915 07:09:38.820861  176341 cli_runner.go:164] Run: docker container inspect multinode-417690-m02 --format={{.State.Status}}
	I0915 07:09:38.838590  176341 status.go:330] multinode-417690-m02 host status = "Running" (err=<nil>)
	I0915 07:09:38.838617  176341 host.go:66] Checking if "multinode-417690-m02" exists ...
	I0915 07:09:38.838916  176341 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-417690-m02
	I0915 07:09:38.855538  176341 host.go:66] Checking if "multinode-417690-m02" exists ...
	I0915 07:09:38.855847  176341 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:09:38.855902  176341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-417690-m02
	I0915 07:09:38.872425  176341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32914 SSHKeyPath:/home/jenkins/minikube-integration/19644-2359/.minikube/machines/multinode-417690-m02/id_rsa Username:docker}
	I0915 07:09:38.971136  176341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:09:38.982989  176341 status.go:257] multinode-417690-m02 status: &{Name:multinode-417690-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:09:38.983025  176341 status.go:255] checking status of multinode-417690-m03 ...
	I0915 07:09:38.983425  176341 cli_runner.go:164] Run: docker container inspect multinode-417690-m03 --format={{.State.Status}}
	I0915 07:09:38.999938  176341 status.go:330] multinode-417690-m03 host status = "Stopped" (err=<nil>)
	I0915 07:09:38.999982  176341 status.go:343] host is not running, skipping remaining checks
	I0915 07:09:38.999990  176341 status.go:257] multinode-417690-m03 status: &{Name:multinode-417690-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)
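Note: `minikube status` reports node state through its exit code, so the exit status 7 above is expected output while m03 is stopped, not a command failure. A sketch that surfaces the per-node report instead of aborting on the non-zero exit (treating any non-zero code as a state to print is an assumption):

	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "multinode-417690", "status").Output()
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("all nodes running")
		case errors.As(err, &exitErr):
			// Non-zero exit still produces the per-node report on stdout.
			fmt.Printf("exit %d, status:\n%s", exitErr.ExitCode(), out)
		default:
			log.Fatal(err) // the binary could not be run at all
		}
	}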

TestMultiNode/serial/StartAfterStop (11.49s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-417690 node start m03 -v=7 --alsologtostderr: (10.706587427s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.49s)

TestMultiNode/serial/RestartKeepsNodes (98.97s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-417690
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-417690
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-417690: (22.620118333s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-417690 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-417690 --wait=true -v=8 --alsologtostderr: (1m16.22778604s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-417690
--- PASS: TestMultiNode/serial/RestartKeepsNodes (98.97s)

TestMultiNode/serial/DeleteNode (5.72s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-417690 node delete m03: (5.032248646s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.72s)

TestMultiNode/serial/StopMultiNode (21.54s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-417690 stop: (21.356786154s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-417690 status: exit status 7 (101.759301ms)

-- stdout --
	multinode-417690
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-417690-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-417690 status --alsologtostderr: exit status 7 (82.266884ms)

-- stdout --
	multinode-417690
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-417690-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0915 07:11:56.694343  189860 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:11:56.694475  189860 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:11:56.694485  189860 out.go:358] Setting ErrFile to fd 2...
	I0915 07:11:56.694491  189860 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:11:56.694737  189860 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-2359/.minikube/bin
	I0915 07:11:56.694946  189860 out.go:352] Setting JSON to false
	I0915 07:11:56.694975  189860 mustload.go:65] Loading cluster: multinode-417690
	I0915 07:11:56.695021  189860 notify.go:220] Checking for updates...
	I0915 07:11:56.695410  189860 config.go:182] Loaded profile config "multinode-417690": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 07:11:56.695428  189860 status.go:255] checking status of multinode-417690 ...
	I0915 07:11:56.696003  189860 cli_runner.go:164] Run: docker container inspect multinode-417690 --format={{.State.Status}}
	I0915 07:11:56.714778  189860 status.go:330] multinode-417690 host status = "Stopped" (err=<nil>)
	I0915 07:11:56.714800  189860 status.go:343] host is not running, skipping remaining checks
	I0915 07:11:56.714808  189860 status.go:257] multinode-417690 status: &{Name:multinode-417690 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:11:56.714837  189860 status.go:255] checking status of multinode-417690-m02 ...
	I0915 07:11:56.715164  189860 cli_runner.go:164] Run: docker container inspect multinode-417690-m02 --format={{.State.Status}}
	I0915 07:11:56.732690  189860 status.go:330] multinode-417690-m02 host status = "Stopped" (err=<nil>)
	I0915 07:11:56.732712  189860 status.go:343] host is not running, skipping remaining checks
	I0915 07:11:56.732720  189860 status.go:257] multinode-417690-m02 status: &{Name:multinode-417690-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.54s)

TestMultiNode/serial/RestartMultiNode (51.04s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-417690 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-417690 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (50.370404108s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-417690 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.04s)

TestMultiNode/serial/ValidateNameConflict (34.77s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-417690
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-417690-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-417690-m02 --driver=docker  --container-runtime=docker: exit status 14 (89.92051ms)

-- stdout --
	* [multinode-417690-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19644-2359/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-2359/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-417690-m02' is duplicated with machine name 'multinode-417690-m02' in profile 'multinode-417690'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-417690-m03 --driver=docker  --container-runtime=docker
E0915 07:12:52.734143    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/functional-175112/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-417690-m03 --driver=docker  --container-runtime=docker: (32.228651013s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-417690
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-417690: exit status 80 (334.347262ms)

-- stdout --
	* Adding node m03 to cluster multinode-417690 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-417690-m03 already exists in multinode-417690-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-417690-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-417690-m03: (2.054684532s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.77s)

TestPreload (114.49s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-712543 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0915 07:13:39.548696    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-712543 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m7.710827239s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-712543 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-712543 image pull gcr.io/k8s-minikube/busybox: (2.121929413s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-712543
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-712543: (10.81398025s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-712543 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-712543 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (31.463845078s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-712543 image list
helpers_test.go:175: Cleaning up "test-preload-712543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-712543
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-712543: (2.14318239s)
--- PASS: TestPreload (114.49s)
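Note: the sequence above pulls busybox into a --preload=false cluster, stops and restarts it, then lists images to confirm the pull survived the restart. A minimal sketch of that final check:

	package main

	import (
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "test-preload-712543", "image", "list").Output()
		if err != nil {
			log.Fatal(err)
		}
		// The image pulled before the stop/start cycle should still be listed.
		if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
			log.Fatal("busybox image missing after restart")
		}
	}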

TestScheduledStopUnix (104.24s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-328475 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-328475 --memory=2048 --driver=docker  --container-runtime=docker: (30.930498708s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-328475 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-328475 -n scheduled-stop-328475
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-328475 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-328475 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-328475 -n scheduled-stop-328475
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-328475
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-328475 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-328475
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-328475: exit status 7 (73.98008ms)

-- stdout --
	scheduled-stop-328475
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-328475 -n scheduled-stop-328475
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-328475 -n scheduled-stop-328475: exit status 7 (77.202177ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-328475" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-328475
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-328475: (1.756026884s)
--- PASS: TestScheduledStopUnix (104.24s)
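
For reference, the --schedule / --cancel-scheduled sequence exercised above can be driven outside the test harness. A minimal sketch in Go, shelling out to the CLI; the profile name and the assumption that a minikube binary is on PATH are illustrative, not taken from this run:

package main

import (
	"fmt"
	"os/exec"
)

// runMinikube shells out to the minikube CLI and returns combined output.
func runMinikube(args ...string) (string, error) {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	return string(out), err
}

func main() {
	profile := "scheduled-stop-demo" // hypothetical profile name

	// Schedule a stop five minutes out, then cancel it before it fires,
	// mirroring the --schedule / --cancel-scheduled sequence in the log.
	if out, err := runMinikube("stop", "-p", profile, "--schedule", "5m"); err != nil {
		fmt.Println("schedule failed:", err, out)
		return
	}
	if out, err := runMinikube("stop", "-p", profile, "--cancel-scheduled"); err != nil {
		fmt.Println("cancel failed:", err, out)
		return
	}

	// Re-schedule with a short window and let it fire; once it does,
	// `minikube status` reports Stopped with exit status 7, as seen above.
	if out, err := runMinikube("stop", "-p", profile, "--schedule", "15s"); err != nil {
		fmt.Println("re-schedule failed:", err, out)
	}
}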

TestSkaffold (121.91s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe859842902 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-564371 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-564371 --memory=2600 --driver=docker  --container-runtime=docker: (35.034777985s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe859842902 run --minikube-profile skaffold-564371 --kube-context skaffold-564371 --status-check=true --port-forward=false --interactive=false
E0915 07:17:52.733022    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/functional-175112/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:18:39.549297    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe859842902 run --minikube-profile skaffold-564371 --kube-context skaffold-564371 --status-check=true --port-forward=false --interactive=false: (1m11.113773645s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-9c8d59886-szrpl" [703130c1-4c11-4ada-a0a6-21eb8eda80a7] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003553196s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-7c9cb8b88f-cnjfc" [d741cc43-6a44-431d-9f6f-dc5a4c54a2dd] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004004446s
helpers_test.go:175: Cleaning up "skaffold-564371" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-564371
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-564371: (2.879779497s)
--- PASS: TestSkaffold (121.91s)

TestInsufficientStorage (11.1s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-053377 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-053377 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (8.823830127s)

-- stdout --
	{"specversion":"1.0","id":"da47ad11-7ead-4546-aa17-d020def4c8f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-053377] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6262290f-2545-48e5-8378-8d197c7c721f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19644"}}
	{"specversion":"1.0","id":"feaba0ac-a551-4fc2-833d-b40d6accf3f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e0b870fc-9e22-4754-9859-a23fde0e9c9e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19644-2359/kubeconfig"}}
	{"specversion":"1.0","id":"2a1ca87b-223e-436e-802b-5a681d76c79d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-2359/.minikube"}}
	{"specversion":"1.0","id":"a67cadc6-ddfa-4a5f-8079-613720f3a07a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"1304232f-3bf5-4e2a-9547-3caf9aab20bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"602924b4-b0d2-4c6e-94aa-e7b68e9812c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"6ffd3f56-24c8-438b-9b0f-82542812fc9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"029d9b7f-667b-4fc3-a8da-8d081abde92b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8e74bbf5-8ede-4ca4-a716-24f8610ebf1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"55602e16-84e7-465a-a556-4cd7e783002f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-053377\" primary control-plane node in \"insufficient-storage-053377\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"53c4902f-ec42-44cd-9717-7d051dcb6e92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726358845-19644 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"3a2bd192-cea9-4465-aba5-fc8eee9cb83a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"626dd462-8423-40ed-8654-ab89defc5e51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-053377 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-053377 --output=json --layout=cluster: exit status 7 (295.714933ms)

-- stdout --
	{"Name":"insufficient-storage-053377","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-053377","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0915 07:19:16.177929  224069 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-053377" does not appear in /home/jenkins/minikube-integration/19644-2359/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-053377 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-053377 --output=json --layout=cluster: exit status 7 (281.318095ms)

-- stdout --
	{"Name":"insufficient-storage-053377","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-053377","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0915 07:19:16.461205  224130 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-053377" does not appear in /home/jenkins/minikube-integration/19644-2359/kubeconfig
	E0915 07:19:16.471405  224130 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/insufficient-storage-053377/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-053377" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-053377
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-053377: (1.698563623s)
--- PASS: TestInsufficientStorage (11.10s)
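
Each line of the --output=json stream above is a standalone CloudEvents envelope whose payload sits under "data" with string-valued fields. A minimal consumer sketch that scans such a stream for the out-of-space error event; the field names are copied from the output above, and reading from stdin is an assumption:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the envelope fields visible in the stream above; every
// value under "data" is a string in this output.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Assumes the `minikube start --output=json` stream is piped to stdin.
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate any non-JSON noise between events
		}
		if ev.Type == "io.k8s.sigs.minikube.error" && ev.Data["exitcode"] == "26" {
			fmt.Println("out of disk space:", ev.Data["message"])
		}
	}
}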

TestRunningBinaryUpgrade (87.87s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4294250791 start -p running-upgrade-658145 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0915 07:23:39.548779    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:23:53.170244    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/skaffold-564371/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:23:53.176603    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/skaffold-564371/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:23:53.187960    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/skaffold-564371/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:23:53.209279    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/skaffold-564371/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:23:53.250679    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/skaffold-564371/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:23:53.332240    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/skaffold-564371/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:23:53.493799    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/skaffold-564371/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:23:53.815506    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/skaffold-564371/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:23:54.457739    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/skaffold-564371/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:23:55.739278    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/skaffold-564371/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:23:58.301422    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/skaffold-564371/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:24:03.422744    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/skaffold-564371/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4294250791 start -p running-upgrade-658145 --memory=2200 --vm-driver=docker  --container-runtime=docker: (42.172571831s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-658145 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0915 07:24:13.664912    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/skaffold-564371/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-658145 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (42.57466898s)
helpers_test.go:175: Cleaning up "running-upgrade-658145" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-658145
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-658145: (2.49750203s)
--- PASS: TestRunningBinaryUpgrade (87.87s)

TestKubernetesUpgrade (125.23s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-021582 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0915 07:21:42.659436    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-021582 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (52.732860342s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-021582
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-021582: (1.320232002s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-021582 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-021582 status --format={{.Host}}: exit status 7 (103.388642ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-021582 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0915 07:22:52.733012    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/functional-175112/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-021582 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (36.37463204s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-021582 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-021582 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-021582 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (132.353598ms)

-- stdout --
	* [kubernetes-upgrade-021582] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19644-2359/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-2359/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-021582
	    minikube start -p kubernetes-upgrade-021582 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0215822 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-021582 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-021582 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-021582 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (31.676030386s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-021582" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-021582
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-021582: (2.747191646s)
--- PASS: TestKubernetesUpgrade (125.23s)
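
The refused downgrade above is reported only via exit status 106 and the K8S_DOWNGRADE_UNSUPPORTED marker on stderr. A minimal sketch of asserting that from Go with the standard library; the profile name here is hypothetical, while the flags mirror the invocation above:

package main

import (
	"bytes"
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Hypothetical profile; flags mirror the downgrade attempt in the log.
	cmd := exec.Command("minikube", "start", "-p", "kubernetes-upgrade-demo",
		"--memory=2200", "--kubernetes-version=v1.20.0", "--driver=docker")
	var stderr bytes.Buffer
	cmd.Stderr = &stderr

	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 106 &&
		strings.Contains(stderr.String(), "K8S_DOWNGRADE_UNSUPPORTED") {
		fmt.Println("downgrade correctly refused")
		return
	}
	fmt.Println("unexpected result:", err)
}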

TestMissingContainerUpgrade (170.35s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.300622763 start -p missing-upgrade-274130 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.300622763 start -p missing-upgrade-274130 --memory=2200 --driver=docker  --container-runtime=docker: (1m29.34117891s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-274130
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-274130: (10.36251613s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-274130
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-274130 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-274130 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m7.862725824s)
helpers_test.go:175: Cleaning up "missing-upgrade-274130" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-274130
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-274130: (2.164629268s)
--- PASS: TestMissingContainerUpgrade (170.35s)

TestPause/serial/Start (82.62s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-752076 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-752076 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m22.616196251s)
--- PASS: TestPause/serial/Start (82.62s)

TestPause/serial/SecondStartNoReconfiguration (35.99s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-752076 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-752076 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (35.970914166s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (35.99s)

TestPause/serial/Pause (0.78s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-752076 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.78s)

TestPause/serial/VerifyStatus (0.38s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-752076 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-752076 --output=json --layout=cluster: exit status 2 (379.765233ms)

-- stdout --
	{"Name":"pause-752076","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-752076","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.38s)
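
The --layout=cluster payload above encodes state as HTTP-style codes (200 OK, 405 Stopped, 418 Paused, 507 InsufficientStorage) nested per node and per component. A minimal decoder sketch, with field names copied from the JSON shown above and the sample trimmed to the relevant keys:

package main

import (
	"encoding/json"
	"fmt"
)

// clusterStatus mirrors the shape of `minikube status --output=json
// --layout=cluster` as printed above (trimmed to the fields used here).
type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string `json:"Name"`
		Components map[string]struct {
			StatusName string `json:"StatusName"`
		} `json:"Components"`
	} `json:"Nodes"`
}

func main() {
	// Sample condensed from the VerifyStatus output above.
	raw := `{"Name":"pause-752076","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-752076","Components":{"apiserver":{"StatusName":"Paused"},"kubelet":{"StatusName":"Stopped"}}}]}`

	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
	for name, c := range st.Nodes[0].Components {
		fmt.Printf("  %s: %s\n", name, c.StatusName)
	}
}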

TestPause/serial/Unpause (0.69s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-752076 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.69s)

TestPause/serial/PauseAgain (0.87s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-752076 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.87s)

TestPause/serial/DeletePaused (2.31s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-752076 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-752076 --alsologtostderr -v=5: (2.311903282s)
--- PASS: TestPause/serial/DeletePaused (2.31s)

TestPause/serial/VerifyDeletedResources (0.48s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-752076
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-752076: exit status 1 (20.870001ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-752076: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.48s)

TestStoppedBinaryUpgrade/Setup (0.59s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.59s)

TestStoppedBinaryUpgrade/Upgrade (111.98s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.804331414 start -p stopped-upgrade-811888 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.804331414 start -p stopped-upgrade-811888 --memory=2200 --vm-driver=docker  --container-runtime=docker: (59.186128588s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.804331414 -p stopped-upgrade-811888 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.804331414 -p stopped-upgrade-811888 stop: (11.755128222s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-811888 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0915 07:24:34.146663    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/skaffold-564371/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-811888 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (41.039637364s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (111.98s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.57s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-811888
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-811888: (2.574496358s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.57s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-975300 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-975300 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (97.242091ms)

-- stdout --
	* [NoKubernetes-975300] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19644-2359/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-2359/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (37.8s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-975300 --driver=docker  --container-runtime=docker
E0915 07:25:15.108028    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/skaffold-564371/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-975300 --driver=docker  --container-runtime=docker: (37.355686735s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-975300 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (37.80s)

TestNoKubernetes/serial/StartWithStopK8s (20.7s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-975300 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-975300 --no-kubernetes --driver=docker  --container-runtime=docker: (18.457375035s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-975300 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-975300 status -o json: exit status 2 (361.392023ms)

-- stdout --
	{"Name":"NoKubernetes-975300","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-975300
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-975300: (1.884697933s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (20.70s)

TestNoKubernetes/serial/Start (8.53s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-975300 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-975300 --no-kubernetes --driver=docker  --container-runtime=docker: (8.533503084s)
--- PASS: TestNoKubernetes/serial/Start (8.53s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-975300 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-975300 "sudo systemctl is-active --quiet service kubelet": exit status 1 (356.626877ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)
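
The non-zero exit above is the expected outcome: systemctl is-active exits 0 only when the unit is active, and status 3 conventionally means inactive, so the test reads failure as proof that kubelet is not running. A minimal sketch of the same probe over minikube ssh, with the profile name copied from the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same probe the test runs: a non-zero exit means kubelet is inactive.
	cmd := exec.Command("minikube", "ssh", "-p", "NoKubernetes-975300",
		"sudo systemctl is-active --quiet service kubelet")
	err := cmd.Run()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("kubelet is active (unexpected for --no-kubernetes)")
	case errors.As(err, &exitErr):
		fmt.Println("kubelet not running, remote exit code:", exitErr.ExitCode())
	default:
		fmt.Println("ssh invocation failed:", err)
	}
}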

TestNoKubernetes/serial/ProfileList (1.16s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.16s)

TestNoKubernetes/serial/Stop (1.23s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-975300
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-975300: (1.230340515s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

TestNoKubernetes/serial/StartNoArgs (8.78s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-975300 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-975300 --driver=docker  --container-runtime=docker: (8.780841498s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.78s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.42s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-975300 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-975300 "sudo systemctl is-active --quiet service kubelet": exit status 1 (423.356957ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.42s)

TestStartStop/group/old-k8s-version/serial/FirstStart (134.73s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-635115 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0915 07:27:52.733888    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/functional-175112/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:28:39.548623    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:28:53.167788    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/skaffold-564371/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:29:20.871638    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/skaffold-564371/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-635115 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m14.730725951s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (134.73s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-635115 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6e160471-df3d-49c2-a4e9-72d485e22a41] Pending
helpers_test.go:344: "busybox" [6e160471-df3d-49c2-a4e9-72d485e22a41] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6e160471-df3d-49c2-a4e9-72d485e22a41] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.002941914s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-635115 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.55s)
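
The "waiting 8m0s for pods matching" step above polls label-selected pods until they report Ready. A rough equivalent outside the harness is kubectl wait; a minimal exec-based sketch, where the context name and label are copied from the log and the timeout mirrors the test's 8m0s budget:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Approximates the harness's readiness poll with `kubectl wait`.
	cmd := exec.Command("kubectl",
		"--context", "old-k8s-version-635115",
		"wait", "--for=condition=Ready",
		"pod", "-l", "integration-test=busybox",
		"--timeout=8m")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("pods never became Ready:", err)
	}
}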

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-635115 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-635115 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.057621139s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-635115 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/old-k8s-version/serial/Stop (11.7s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-635115 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-635115 --alsologtostderr -v=3: (11.695710156s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.70s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-635115 -n old-k8s-version-635115
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-635115 -n old-k8s-version-635115: exit status 7 (197.866935ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-635115 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.49s)

TestStartStop/group/old-k8s-version/serial/SecondStart (145.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-635115 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-635115 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m24.917380022s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-635115 -n old-k8s-version-635115
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (145.31s)

TestStartStop/group/no-preload/serial/FirstStart (62.03s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-305461 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-305461 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m2.032387541s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (62.03s)

TestStartStop/group/no-preload/serial/DeployApp (9.4s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-305461 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [60f6d863-6559-421c-923b-966046b671d9] Pending
helpers_test.go:344: "busybox" [60f6d863-6559-421c-923b-966046b671d9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [60f6d863-6559-421c-923b-966046b671d9] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004074917s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-305461 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.40s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.04s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-305461 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-305461 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.04s)

TestStartStop/group/no-preload/serial/Stop (11.42s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-305461 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-305461 --alsologtostderr -v=3: (11.424372124s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.42s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-305461 -n no-preload-305461
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-305461 -n no-preload-305461: exit status 7 (78.079847ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-305461 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (267.21s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-305461 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-305461 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m26.859438268s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-305461 -n no-preload-305461
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (267.21s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-bw74v" [74186f4e-a5fd-4e59-bed0-630eea78742e] Running
E0915 07:32:52.732827    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/functional-175112/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003686252s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-bw74v" [74186f4e-a5fd-4e59-bed0-630eea78742e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005082109s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-635115 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-635115 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/old-k8s-version/serial/Pause (2.73s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-635115 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-635115 -n old-k8s-version-635115
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-635115 -n old-k8s-version-635115: exit status 2 (319.152117ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-635115 -n old-k8s-version-635115
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-635115 -n old-k8s-version-635115: exit status 2 (337.757789ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-635115 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-635115 -n old-k8s-version-635115
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-635115 -n old-k8s-version-635115
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.73s)
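
The Pause step drives a pause/unpause cycle and reads component state through Go-template status fields; the exit status 2 results above are expected, since status exits nonzero whenever a component is not Running. A minimal local sketch, assuming a minikube binary on PATH and an existing profile named demo:

    # pause everything, confirm the reported state, then unpause
    minikube pause -p demo --alsologtostderr -v=1
    minikube status --format='{{.APIServer}}' -p demo   # prints "Paused", exits 2
    minikube status --format='{{.Kubelet}}' -p demo     # prints "Stopped", exits 2
    minikube unpause -p demo --alsologtostderr -v=1
    minikube status --format='{{.APIServer}}' -p demo   # "Running" again, exits 0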

TestStartStop/group/embed-certs/serial/FirstStart (76.7s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-649973 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0915 07:33:39.548889    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:33:53.167728    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/skaffold-564371/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-649973 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m16.702488962s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (76.70s)

TestStartStop/group/embed-certs/serial/DeployApp (10.35s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-649973 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0e45a4a4-a321-49d3-94db-ea0636b6325e] Pending
helpers_test.go:344: "busybox" [0e45a4a4-a321-49d3-94db-ea0636b6325e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0e45a4a4-a321-49d3-94db-ea0636b6325e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003809038s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-649973 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.35s)
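
DeployApp creates a busybox pod from the suite's testdata/busybox.yaml, waits for it to run, then reads the open-file ulimit inside the container. A self-contained sketch; the inline pod spec only approximates that testdata file (the image name is taken from the VerifyKubernetesImages output elsewhere in this report, the rest is assumption):

    kubectl --context embed-certs-649973 create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox
      labels:
        integration-test: busybox
    spec:
      containers:
      - name: busybox
        image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
        command: ["sleep", "3600"]
    EOF
    kubectl --context embed-certs-649973 wait --for=condition=Ready pod/busybox --timeout=8m
    kubectl --context embed-certs-649973 exec busybox -- /bin/sh -c "ulimit -n"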

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-649973 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-649973 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.0258769s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-649973 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)
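
This step enables metrics-server on the live cluster while overriding both the addon image and its registry; fake.domain presumably never resolves, which is harmless here because the test only inspects the resulting Deployment spec. A sketch of confirming the override landed, assuming metrics-server's Deployment lives in kube-system as the describe call above implies:

    minikube addons enable metrics-server -p embed-certs-649973 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    # the container image should now point at the fake.domain registry
    kubectl --context embed-certs-649973 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'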

TestStartStop/group/embed-certs/serial/Stop (10.97s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-649973 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-649973 --alsologtostderr -v=3: (10.96869552s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.97s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-649973 -n embed-certs-649973
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-649973 -n embed-certs-649973: exit status 7 (69.961514ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-649973 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)
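
Two behaviors make this step work: status exits with code 7 when the host itself is stopped (matching the Stopped output above), and addons can still be toggled on a stopped profile because the change appears to be recorded in the profile's config, to be applied on the next start. A sketch against a stopped profile:

    minikube status --format='{{.Host}}' -p embed-certs-649973 || echo "exit=$?"
    # enabling an addon while stopped only updates the stored profile config
    minikube addons enable dashboard -p embed-certs-649973 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4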

TestStartStop/group/embed-certs/serial/SecondStart (289.15s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-649973 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0915 07:35:04.542140    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/old-k8s-version-635115/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:35:04.548488    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/old-k8s-version-635115/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:35:04.559834    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/old-k8s-version-635115/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:35:04.581170    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/old-k8s-version-635115/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:35:04.622462    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/old-k8s-version-635115/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:35:04.703866    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/old-k8s-version-635115/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:35:04.865236    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/old-k8s-version-635115/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:35:05.186602    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/old-k8s-version-635115/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:35:05.828547    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/old-k8s-version-635115/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:35:07.110654    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/old-k8s-version-635115/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:35:09.672159    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/old-k8s-version-635115/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:35:14.794391    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/old-k8s-version-635115/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:35:25.036207    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/old-k8s-version-635115/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:35:45.518536    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/old-k8s-version-635115/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:36:26.480051    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/old-k8s-version-635115/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-649973 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m48.790367578s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-649973 -n embed-certs-649973
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (289.15s)
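
(The repeated cert_rotation errors interleaved above appear to come from client-go certificate-reload watchers in the shared test process that still reference client.crt files of profiles deleted earlier in the run; they are background noise, not failures of this test.)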

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-r2xhl" [37563831-0d47-4324-bb75-71919d771b89] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004163389s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-r2xhl" [37563831-0d47-4324-bb75-71919d771b89] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004667061s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-305461 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-305461 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (2.88s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-305461 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-305461 -n no-preload-305461
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-305461 -n no-preload-305461: exit status 2 (342.682183ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-305461 -n no-preload-305461
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-305461 -n no-preload-305461: exit status 2 (375.449802ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-305461 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-305461 -n no-preload-305461
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-305461 -n no-preload-305461
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.88s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (68.75s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-471448 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0915 07:37:48.401420    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/old-k8s-version-635115/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:37:52.732573    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/functional-175112/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-471448 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m8.749489008s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (68.75s)
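
The default-k8s-diff-port group starts the API server on 8444 instead of minikube's usual 8443. The quickest place to confirm the port is the kubeconfig entry the start call wrote; a sketch:

    # the cluster's server URL should end in :8444
    kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-471448")].cluster.server}'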

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-471448 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [13f005d3-1be2-4a27-80ea-6f54389a7fd1] Pending
helpers_test.go:344: "busybox" [13f005d3-1be2-4a27-80ea-6f54389a7fd1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [13f005d3-1be2-4a27-80ea-6f54389a7fd1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.005050028s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-471448 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.40s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-471448 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-471448 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.006706179s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-471448 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.8s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-471448 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-471448 --alsologtostderr -v=3: (10.800745938s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.80s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-471448 -n default-k8s-diff-port-471448
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-471448 -n default-k8s-diff-port-471448: exit status 7 (87.580513ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-471448 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-471448 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0915 07:38:22.661083    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:38:39.548949    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:38:53.167876    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/skaffold-564371/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-471448 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m26.91086243s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-471448 -n default-k8s-diff-port-471448
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.42s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-txrnw" [5ef7fdef-0f78-4e33-8ce2-78cdf12c76e8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003360548s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-txrnw" [5ef7fdef-0f78-4e33-8ce2-78cdf12c76e8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003924559s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-649973 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-649973 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/embed-certs/serial/Pause (2.86s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-649973 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-649973 -n embed-certs-649973
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-649973 -n embed-certs-649973: exit status 2 (319.305032ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-649973 -n embed-certs-649973
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-649973 -n embed-certs-649973: exit status 2 (343.943134ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-649973 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-649973 -n embed-certs-649973
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-649973 -n embed-certs-649973
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.86s)

TestStartStop/group/newest-cni/serial/FirstStart (40.2s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-221573 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0915 07:40:04.542748    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/old-k8s-version-635115/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:40:16.233107    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/skaffold-564371/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:40:32.242917    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/old-k8s-version-635115/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-221573 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (40.196657947s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (40.20s)
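
newest-cni starts with a trimmed --wait set and a bare CNI network plugin, which is why the DeployApp and *AfterStop steps below are skipped with warnings: as the suite notes, cni mode requires additional setup before pods can schedule. The same invocation, reflowed for readability (the profile name here is arbitrary):

    minikube start -p newest-cni-demo --memory=2200 \
      --wait=apiserver,system_pods,default_sa \
      --feature-gates ServerSideApply=true \
      --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=docker \
      --kubernetes-version=v1.31.1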

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.36s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-221573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-221573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.356776333s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.36s)

TestStartStop/group/newest-cni/serial/Stop (10.14s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-221573 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-221573 --alsologtostderr -v=3: (10.140624719s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.14s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-221573 -n newest-cni-221573
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-221573 -n newest-cni-221573: exit status 7 (71.74789ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-221573 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (18.46s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-221573 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-221573 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (18.036989022s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-221573 -n newest-cni-221573
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (18.46s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-221573 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/newest-cni/serial/Pause (2.89s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-221573 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-221573 -n newest-cni-221573
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-221573 -n newest-cni-221573: exit status 2 (338.227401ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-221573 -n newest-cni-221573
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-221573 -n newest-cni-221573: exit status 2 (343.506444ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-221573 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-221573 -n newest-cni-221573
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-221573 -n newest-cni-221573
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.89s)

TestNetworkPlugins/group/auto/Start (72.59s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-621015 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E0915 07:41:39.343123    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/no-preload-305461/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:41:39.349464    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/no-preload-305461/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:41:39.360803    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/no-preload-305461/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:41:39.382159    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/no-preload-305461/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:41:39.423479    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/no-preload-305461/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:41:39.505360    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/no-preload-305461/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:41:39.666939    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/no-preload-305461/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:41:39.988737    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/no-preload-305461/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:41:40.630850    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/no-preload-305461/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:41:41.912235    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/no-preload-305461/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:41:44.474400    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/no-preload-305461/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:41:49.595984    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/no-preload-305461/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:41:59.839213    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/no-preload-305461/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:42:20.320557    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/no-preload-305461/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-621015 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m12.590732417s)
--- PASS: TestNetworkPlugins/group/auto/Start (72.59s)

TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-621015 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)
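
KubeletFlags simply dumps the kubelet command line inside the node so the suite can assert on the flags it was started with; the same probe works interactively:

    # print the kubelet process and its full argument list on the node
    minikube ssh -p auto-621015 "pgrep -a kubelet"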

TestNetworkPlugins/group/auto/NetCatPod (9.34s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-621015 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-sglgf" [a4d63046-dbf3-4d8b-8326-53455aa74c62] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-sglgf" [a4d63046-dbf3-4d8b-8326-53455aa74c62] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003844158s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.34s)

TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-621015 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-621015 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-621015 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
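
Each network-plugin group ends with the same three probes against the netcat deployment: DNS (in-cluster name resolution), Localhost (a container reaching its own loopback), and HairPin (a pod dialing itself back through its own Service, i.e. hairpin NAT). Collected as a sketch, assuming the suite's netcat Deployment and Service listening on 8080:

    kubectl --context auto-621015 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-621015 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # hairpin: the service name resolves back to the very pod doing the dialing
    kubectl --context auto-621015 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"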

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-fnmkb" [1db8e27f-3e32-4191-a369-b7a1493ffd91] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005030104s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-fnmkb" [1db8e27f-3e32-4191-a369-b7a1493ffd91] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004557582s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-471448 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.17s)

TestNetworkPlugins/group/kindnet/Start (79.29s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-621015 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-621015 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m19.288381492s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (79.29s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-471448 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.78s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-471448 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-471448 -n default-k8s-diff-port-471448
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-471448 -n default-k8s-diff-port-471448: exit status 2 (427.774181ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-471448 -n default-k8s-diff-port-471448
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-471448 -n default-k8s-diff-port-471448: exit status 2 (404.215114ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-471448 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-471448 -n default-k8s-diff-port-471448
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-471448 -n default-k8s-diff-port-471448
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.78s)
E0915 07:49:54.618283    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/kindnet-621015/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/calico/Start (80.92s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-621015 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
E0915 07:43:39.548626    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:43:53.168104    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/skaffold-564371/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-621015 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m20.922963965s)
--- PASS: TestNetworkPlugins/group/calico/Start (80.92s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-6knzn" [52c1c972-72c9-4dfe-81c4-53ada3ea1077] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003683517s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-621015 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.31s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-621015 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-krth2" [6c85eff1-7f3b-4e80-9e19-fe87f9ed9ecb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0915 07:44:23.203552    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/no-preload-305461/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-krth2" [6c85eff1-7f3b-4e80-9e19-fe87f9ed9ecb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005406587s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.31s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-fm62c" [6e6e55e8-e345-468c-b902-83cfc214dc5c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00539187s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-621015 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

TestNetworkPlugins/group/calico/NetCatPod (10.32s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-621015 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fm672" [67eac836-d5cf-4d38-a887-394c1a389535] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-fm672" [67eac836-d5cf-4d38-a887-394c1a389535] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.009574603s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.32s)

TestNetworkPlugins/group/kindnet/DNS (0.26s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-621015 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.26s)

TestNetworkPlugins/group/kindnet/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-621015 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

TestNetworkPlugins/group/kindnet/HairPin (0.23s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-621015 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.23s)

TestNetworkPlugins/group/calico/DNS (0.35s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-621015 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.35s)

TestNetworkPlugins/group/calico/Localhost (0.24s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-621015 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.24s)

TestNetworkPlugins/group/calico/HairPin (0.25s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-621015 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.25s)

TestNetworkPlugins/group/custom-flannel/Start (60.11s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-621015 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E0915 07:45:04.542535    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/old-k8s-version-635115/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-621015 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m0.10988899s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (60.11s)

TestNetworkPlugins/group/false/Start (56.93s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-621015 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-621015 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (56.93359212s)
--- PASS: TestNetworkPlugins/group/false/Start (56.93s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-621015 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.42s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-621015 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-h5wpb" [bc00009d-9b62-4e86-af37-11836a41ae64] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-h5wpb" [bc00009d-9b62-4e86-af37-11836a41ae64] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.005312725s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.42s)

TestNetworkPlugins/group/false/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-621015 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.32s)

TestNetworkPlugins/group/false/NetCatPod (10.28s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-621015 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zlvt8" [24f63fb3-7b21-41d6-a490-6293a10422ca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-zlvt8" [24f63fb3-7b21-41d6-a490-6293a10422ca] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.008841314s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.28s)

TestNetworkPlugins/group/custom-flannel/DNS (0.3s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-621015 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.30s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-621015 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.25s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-621015 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.25s)

TestNetworkPlugins/group/false/DNS (0.29s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-621015 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.29s)

TestNetworkPlugins/group/false/Localhost (0.25s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-621015 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.25s)

TestNetworkPlugins/group/false/HairPin (0.28s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-621015 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.28s)

TestNetworkPlugins/group/enable-default-cni/Start (59.32s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-621015 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-621015 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (59.315310945s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (59.32s)

TestNetworkPlugins/group/flannel/Start (61.62s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-621015 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E0915 07:47:07.045443    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/no-preload-305461/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:47:23.708062    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/auto-621015/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:47:23.714590    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/auto-621015/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:47:23.726201    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/auto-621015/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:47:23.747611    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/auto-621015/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:47:23.789410    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/auto-621015/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:47:23.870655    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/auto-621015/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:47:24.033004    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/auto-621015/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:47:24.354985    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/auto-621015/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:47:24.997127    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/auto-621015/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:47:26.279293    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/auto-621015/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:47:28.840628    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/auto-621015/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-621015 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m1.624069484s)
--- PASS: TestNetworkPlugins/group/flannel/Start (61.62s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.43s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-621015 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.43s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.4s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-621015 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-dl5gc" [64a93720-0522-4275-9fd4-fafb29e0ca2d] Pending
E0915 07:47:33.962168    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/auto-621015/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-dl5gc" [64a93720-0522-4275-9fd4-fafb29e0ca2d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-dl5gc" [64a93720-0522-4275-9fd4-fafb29e0ca2d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.005451115s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.40s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-lqp5k" [73ea4ead-b781-4698-a73d-f5987865bc2d] Running
E0915 07:47:44.203956    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/auto-621015/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004430225s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-621015 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-621015 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-621015 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-621015 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (10.28s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-621015 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6dsvg" [127ab475-97d5-4e1d-834d-0e9e32aefd38] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0915 07:47:52.733394    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/functional-175112/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-6dsvg" [127ab475-97d5-4e1d-834d-0e9e32aefd38] Running
E0915 07:47:55.580837    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/default-k8s-diff-port-471448/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:47:55.587564    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/default-k8s-diff-port-471448/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:47:55.598905    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/default-k8s-diff-port-471448/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:47:55.620261    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/default-k8s-diff-port-471448/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:47:55.662256    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/default-k8s-diff-port-471448/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:47:55.743630    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/default-k8s-diff-port-471448/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:47:55.905732    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/default-k8s-diff-port-471448/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:47:56.227877    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/default-k8s-diff-port-471448/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:47:56.870151    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/default-k8s-diff-port-471448/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:47:58.151725    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/default-k8s-diff-port-471448/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.007246175s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.28s)

TestNetworkPlugins/group/flannel/DNS (0.5s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-621015 exec deployment/netcat -- nslookup kubernetes.default
E0915 07:48:00.713952    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/default-k8s-diff-port-471448/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.50s)

TestNetworkPlugins/group/flannel/Localhost (0.26s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-621015 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.26s)

TestNetworkPlugins/group/flannel/HairPin (0.37s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-621015 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.37s)

TestNetworkPlugins/group/bridge/Start (82.37s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-621015 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0915 07:48:16.078062    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/default-k8s-diff-port-471448/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-621015 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m22.373921419s)
--- PASS: TestNetworkPlugins/group/bridge/Start (82.37s)

TestNetworkPlugins/group/kubenet/Start (51.34s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-621015 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0915 07:48:36.560207    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/default-k8s-diff-port-471448/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:48:39.549439    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/addons-837740/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:48:45.647065    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/auto-621015/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:48:53.167925    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/skaffold-564371/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:49:13.640700    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/kindnet-621015/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:49:13.647109    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/kindnet-621015/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:49:13.658626    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/kindnet-621015/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:49:13.680217    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/kindnet-621015/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:49:13.721724    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/kindnet-621015/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:49:13.803226    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/kindnet-621015/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:49:13.964903    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/kindnet-621015/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:49:14.287468    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/kindnet-621015/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:49:14.929698    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/kindnet-621015/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:49:16.211650    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/kindnet-621015/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:49:17.522653    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/default-k8s-diff-port-471448/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-621015 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (51.339551774s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (51.34s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-621015 "pgrep -a kubelet"
E0915 07:49:18.772983    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/kindnet-621015/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.31s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.29s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-621015 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9gw78" [a45ef15f-74d5-431c-b2ac-fbf97e6815fa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9gw78" [a45ef15f-74d5-431c-b2ac-fbf97e6815fa] Running
E0915 07:49:23.517074    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/calico-621015/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:49:23.523490    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/calico-621015/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:49:23.534877    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/calico-621015/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:49:23.556319    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/calico-621015/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:49:23.597861    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/calico-621015/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:49:23.679762    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/calico-621015/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:49:23.841097    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/calico-621015/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:49:23.894606    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/kindnet-621015/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:49:24.162436    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/calico-621015/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:49:24.804195    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/calico-621015/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:49:26.085853    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/calico-621015/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:49:28.648556    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/calico-621015/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.003696969s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.29s)

TestNetworkPlugins/group/kubenet/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-621015 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.19s)

TestNetworkPlugins/group/kubenet/Localhost (0.22s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-621015 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.22s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.42s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-621015 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.42s)

TestNetworkPlugins/group/kubenet/HairPin (0.23s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-621015 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.23s)

TestNetworkPlugins/group/bridge/NetCatPod (9.46s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-621015 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xwff9" [bd0d2681-6ba5-49bd-b230-bf9abb59c353] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0915 07:49:33.770458    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/calico-621015/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:49:34.136181    7668 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/kindnet-621015/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-xwff9" [bd0d2681-6ba5-49bd-b230-bf9abb59c353] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004810217s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.46s)

TestNetworkPlugins/group/bridge/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-621015 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.24s)

TestNetworkPlugins/group/bridge/Localhost (0.23s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-621015 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.23s)

TestNetworkPlugins/group/bridge/HairPin (0.3s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-621015 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.30s)

Test skip (24/343)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0.53s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-771311 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-771311" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-771311
--- SKIP: TestDownloadOnlyKic (0.53s)

TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-217917" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-217917
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/cilium (3.75s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-621015 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-621015

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-621015

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-621015

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-621015

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-621015

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-621015

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-621015

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-621015

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-621015

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-621015

>>> host: /etc/nsswitch.conf:
* Profile "cilium-621015" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-621015"

>>> host: /etc/hosts:
* Profile "cilium-621015" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-621015"

>>> host: /etc/resolv.conf:
* Profile "cilium-621015" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-621015"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-621015

>>> host: crictl pods:
* Profile "cilium-621015" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-621015"

>>> host: crictl containers:
* Profile "cilium-621015" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-621015"

>>> k8s: describe netcat deployment:
error: context "cilium-621015" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-621015" does not exist

>>> k8s: netcat logs:
error: context "cilium-621015" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-621015" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-621015" does not exist

>>> k8s: coredns logs:
error: context "cilium-621015" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-621015" does not exist

>>> k8s: api server logs:
error: context "cilium-621015" does not exist

>>> host: /etc/cni:
* Profile "cilium-621015" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-621015"

>>> host: ip a s:
* Profile "cilium-621015" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-621015"

>>> host: ip r s:
* Profile "cilium-621015" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-621015"

>>> host: iptables-save:
* Profile "cilium-621015" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-621015"

>>> host: iptables table nat:
* Profile "cilium-621015" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-621015"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-621015

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-621015

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-621015" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-621015" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-621015

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-621015

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-621015" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-621015" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-621015" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-621015" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-621015" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-621015" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-621015"

>>> host: kubelet daemon config:
* Profile "cilium-621015" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-621015"

>>> k8s: kubelet logs:
* Profile "cilium-621015" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-621015"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-621015" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-621015"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-621015" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-621015"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19644-2359/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 15 Sep 2024 07:25:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: NoKubernetes-975300
contexts:
- context:
    cluster: NoKubernetes-975300
    extensions:
    - extension:
        last-update: Sun, 15 Sep 2024 07:25:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: NoKubernetes-975300
  name: NoKubernetes-975300
current-context: ""
kind: Config
preferences: {}
users:
- name: NoKubernetes-975300
  user:
    client-certificate: /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/NoKubernetes-975300/client.crt
    client-key: /home/jenkins/minikube-integration/19644-2359/.minikube/profiles/NoKubernetes-975300/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-621015

>>> host: docker daemon status:
* Profile "cilium-621015" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-621015"

>>> host: docker daemon config:
* Profile "cilium-621015" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-621015"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-621015" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-621015"

>>> host: docker system info:
* Profile "cilium-621015" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-621015"

>>> host: cri-docker daemon status:
* Profile "cilium-621015" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-621015"

>>> host: cri-docker daemon config:
* Profile "cilium-621015" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-621015"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-621015" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-621015"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-621015" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-621015"

>>> host: cri-dockerd version:
* Profile "cilium-621015" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-621015"

>>> host: containerd daemon status:
* Profile "cilium-621015" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-621015"

>>> host: containerd daemon config:
* Profile "cilium-621015" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-621015"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-621015" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-621015"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-621015" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-621015"

>>> host: containerd config dump:
* Profile "cilium-621015" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-621015"

>>> host: crio daemon status:
* Profile "cilium-621015" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-621015"

>>> host: crio daemon config:
* Profile "cilium-621015" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-621015"

>>> host: /etc/crio:
* Profile "cilium-621015" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-621015"

>>> host: crio config:
* Profile "cilium-621015" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-621015"

----------------------- debugLogs end: cilium-621015 [took: 3.608759942s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-621015" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-621015
--- SKIP: TestNetworkPlugins/group/cilium (3.75s)